Subject: Re: near/medium future digital media economics
From: "Ben Tilly" <btilly@gmail.com>
Date: Fri, 19 May 2006 16:38:34 -0700

On 5/19/06, Thomas Lord <lord@emf.net> wrote:
> The Odlyzko/Tilly arguments against Metcalfe's law make a
> certain amount of sense for telegraphs, telephones, and
> scientific journals.  One user of the phone is greatly
> interested in being able to call people in his immediate region
> or close associates, friends and family.  It's less interesting
> to be able to call others less directly associated.  This
> locality property makes it reasonable to guess that the
> distribution of interesting-to-call people follows a power law:
> log N of the total number of network subscribers are valuable
> to have in the network.  Empirically, in a set of N scientific
> journals, log N are worth reading, to a given user, presumably
> because of similar locality laws (this time "locality of
> intellectual interest").

Exactly.

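For concreteness, here is a toy sketch (Python, with arbitrary
network sizes) of how far apart the two valuations get:

    # Metcalfe values a network of n users at ~n^2; the locality
    # argument above says each user only cares about ~log(n) others,
    # giving ~n*log(n) in total.  The sizes below are arbitrary.
    import math

    for n in (10**3, 10**6, 10**9):
        print(f"n={n:>10}  n^2={n*n:.1e}  n*log(n)={n*math.log(n):.1e}")

The gap is a factor of n/log(n), so at large n the two laws predict
wildly different economics.
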
> Well, let's consider a communication network built on a very
> specific kind of collaborative document: GNU/Linux software.
> The software has a modular, hierarchically layered structure.
> There are many key components that *every* user uses, directly
> or only slightly indirectly: the kernel, the shell utilities,
> the compilers, the debugging tools, the revision control systems
> and bug trackers, and so forth.  Let's call those components
> that every user benefits from the "foundational components".
> Even if there are K programs overall, and each user is
> interested in at most log K of those programs, still every user
> benefits from these foundational components.  Improvements to
> the foundational components raise the value of O(K) programs.

Remember that I said you have to differentiate between value at a
point in time and value over time?  In particular, collaborating over
a slightly better medium means that exponential progress follows a
slightly higher exponent, which compounds into an exponentially
growing advantage over time.  Your arguments here about how feedback
works on itself at large scale address the second type of value.
However, you're focused on the mechanics of the improvement and
missing that the usual pattern is exponential improvement in whatever
metric is agreed to be of interest.

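To make that concrete, here is a minimal sketch (the rates are made
up; only the shape of the curves matters):

    # A slightly better medium bumps the growth exponent from r to
    # r + delta.  The ratio of the two trajectories is exp(delta*t),
    # which itself grows exponentially.  r and delta are invented.
    import math

    r, delta = 0.30, 0.05
    for t in (1, 10, 30):
        ratio = math.exp((r + delta) * t) / math.exp(r * t)
        print(f"t={t:>2}  advantage = exp(delta*t) = {ratio:.2f}x")
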
> There are three things to note:
>
> First: there is no shortage of potential, practical improvements
> to make to the foundational programs.  We are limited mostly by
> a shortage of people working on them.
>
> Second: Even if a person is only using and/or improving one of
> the K programs overall, because each program involves using the
> foundational programs, they often wind up contributing back *to
> the foundational programs*.  In short, there is no other choice
> than when contributing to the free software network *but* to
> help *all* users of the network.
>
> Third: The quality of the foundational components is at least
> co-dominant in determining the value of the entire collection of
> free software, from essentially all users' perspectives.  You
> can compare, for example, the value of the GNU shell utils
> before and after a viable kernel was created, or before and
> after each significant improvement is made to the kernel.  Yes,
> it is true that not every user executes every improvement, but
> every user does benefit when an improvement expands the overall
> contributor network.

I'm not disputing this, but I'll qualify it by saying that very few
users participate.  However, the feedback is such that even a very
small fraction still leads to tremendous value over time.  This is the
basic open source value proposition.

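As a back-of-the-envelope sketch (every number here is invented; only
the shape of the argument matters):

    # Even a tiny participation rate compounds.  Assume a fraction f
    # of N users contribute, and each contributor adds a small
    # fractional improvement per year.
    f, N = 0.001, 1_000_000          # assumed participation and user base
    contributors = f * N             # 1,000 active contributors
    rate = 0.0005                    # assumed improvement per contributor-year
    growth = (1 + rate) ** contributors
    print(f"{contributors:.0f} contributors -> {growth:.2f}x value per year")
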
> So, Tiemann's claim that Metcalfe applies to free software is
> pretty plausible.  To a first approximation, every new member of
> the network benefits every other member of the network equally.
> That's just the nature of software, at least as we best know how
> to build it.

Tiemann's claim that Metcalfe's Law applies is either optimistic or
pessimistic, depending on how you look at it.

At a given point in time he's being optimistic: I really am not
interested in, nor do I care about, what most other people are doing.

Over time you get a faster exponential improvement, which looks more
like Reed's Law than Metcalfe's.

You may wonder how to reconcile those two facts.  That is not so
hard: it turns out that rational people discount the value of future
events exponentially, so my expected value from future improvements
can be finite.  Getting the calculations right can be hard, but as
long as the discount rate outpaces the growth rate, the sum works
out to be finite.

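A minimal sketch of that convergence (the growth and discount rates
are assumptions, not measurements):

    # If value grows like g^t but the future is discounted by d^t,
    # the series sum(g^t * d^t) converges whenever g*d < 1, to
    # 1/(1 - g*d).
    g, d = 1.05, 0.90                     # 5% growth, steep discounting
    total = sum((g * d) ** t for t in range(1000))
    print(total, 1 / (1 - g * d))         # partial sum vs. closed form
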
> Now, what about a collaborative "hyperlibrary" of all digital
> media?  Will it have foundational components with similar
> properties?
>
> I would say the evidence points towards "yes".  For example,
> when user access and linking patterns to a hyperlibrary are
> collected and analyzed, a global improvement is made to indexing
> and ease of access.
>
> Another example is annotations.  You may well be directly
> interested in only "log N" of the reviews contributed to Amazon
> or the changes submitted to Wikipedia, but then what is the next
> step?  All of those reviews and changes can be collectively
> analyzed, new structure discovered among them, and this can be
> applied in turn to increase the utility to you of those books
> and articles you *are* interested in.
>
> More specifically: hypothesize a subset of the community of
> contributors to Wikipedia who are interested in articles about
> scientific laws.  Of all N Wikipedia users at any one time, for
> some constant K, this subset is some O(N/K) users.
> Periodically, an editor for one of these articles has a great
> idea about a new kind of table that every such entry would
> benefit from, or about a new kind of cross-linking that every
> such article could usefully adopt.  These insights
> spread rapidly through the network and are implemented at very
> low marginal cost to each of the N/K users.  They spread rapidly
> because O(N/K) users have voted with their feet and all agree
> that the utility of each article will then go up by more than
> that marginal cost.  Once again, Metcalfe applies.

Over time we find exponential improvement in many kinds of products
that are readily improvable.  This has no connection to Metcalfe's
Law.

> Telephones have nearly nothing like these properties (until they
> become data lines!).  Aside from freakish moments of history, it
> is hard to imagine many situations where a single call quickly
> raises the value of *all* calls.  But in the case of a
> hyperlibrary, such shifts can be produced almost systematically.
>
> Scientific journals come close to having some of these properties,
> but not quite.  Of course, breakthroughs in the foundations of
> science with global implications not only happen but can be
> systematically searched for.  Yet, in static journals, what
> happens when a breakthrough comes along?  You don't have O(N/K)
> users going back and adding an annotation to every historic
> article which relates to but predates the breakthrough.
> Instead, people have to publish entirely new articles, at very
> high marginal cost.

People would publish those new articles anyway; they just all
reference the new paper.  It might help your intuition to examine,
say, the effect of the Fast Fourier Transform on the existing field
of data analysis.  (Most of that field was devoted to transforms that
shared some good property with the Fourier transform but were easier
to calculate.  Needless to say, all of that became obsolete extremely
rapidly.)

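The FFT point in miniature (a sketch using numpy's standard FFT): a
naive DFT costs O(n^2) while the FFT costs O(n log n), which is why
the older specialized transforms lost their one advantage overnight.

    import numpy as np

    def naive_dft(x):
        # Direct O(n^2) evaluation of the discrete Fourier transform.
        n = len(x)
        k = np.arange(n)
        m = np.exp(-2j * np.pi * np.outer(k, k) / n)
        return m @ x

    x = np.random.rand(256)
    # Same answer, but np.fft.fft computes it in O(n log n).
    assert np.allclose(naive_dft(x), np.fft.fft(x))
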
But even so, you'd be learning about the wrong thing.  The real
question is the value of a network at a given point in time versus a
competing network.  That question is what will drive usage, and
therefore the existence (or not) of business opportunities.  And if
Metcalfe does not apply in that situation (and I think it will not),
then the business landscape will be very different than if it does.

> Telephones and journals aside.  For free software -- and for a
> hyperlibrary -- the reliability of global improvements from
> low-cost, incremental, individual communications means that
> Metcalfe, in this case, is right.
>
> -t
>
> p.s.: There are separate arguments to be made that, after all,
> Metcalfe is *also* right for telephones and telegraphs and
> scientific journals, but I have not included those here.

Don't bother making them.  The real-life behaviour of companies
involved in telephony and scientific journals doesn't match what
Metcalfe's Law projects it should be.  That is direct empirical
evidence that Metcalfe's Law doesn't really work for them.

I singled out telegraphs because they are a special case.  If I put up a
telegraph station in New York, and another in Boston, and a line
between them, then I've now connected everyone in New York to everyone
in Boston.  Therefore no matter what the network effects may be, it is
easy for a competitor to create a large network from scratch.

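To put numbers on it (the city populations below are invented; only
the scaling matters):

    # One line between two hub stations connects every cross-city
    # pair, so connected pairs scale with the product of populations,
    # not with the competitor's investment in lines.
    ny, boston = 500_000, 150_000
    pairs_per_line = ny * boston
    print(f"{pairs_per_line:.1e} cross-city pairs from a single line")
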
This is not hypothetical: in several countries, at several times, an
upstart was able to arise and challenge a dominant monopoly for
periods of decades, which empirically demonstrates that barriers to
entry cannot have been that high in telegraphy.  By contrast, in
land-based telephony the only way that monopolies have been
successfully challenged is when the legal system intervened against
the monopolist, either by breaking it up or by requiring that it give
its competitors various kinds of assistance, including interconnection.

So network effects can be observed to matter in telephony, but not in
telegraphy.

Cheers,
Ben