Subject: Re: Oversimplifications in HtN -- Philosophy and biology
From: "Eric S. Raymond" <>
Date: Sun, 19 Sep 1999 01:36:32 -0400

Ian Lance Taylor <>:
>    Date: Mon, 30 Aug 1999 15:51:47 -0400
>    From: "Eric S. Raymond" <>
>    I will argue that I am (necessarily) simplifying, but not
>    *over*-simplifying.  I offer a precise definition: a model is
>    oversimplified when it is unable to predictively capture features
>    of the behavior under discussion.
> That definition omits correctness.  I think a model is oversimplified
> if it does not correspond to reality.  The fact that it is possible to
> construct a simple predictive model does not imply that the model is
> correct.  It may merely be successful in a limited domain, or for a
> limited time.

(Uh oh, you've done a bad thing now.  You triggered my analytic-philosophy
circuits.... :-))

Nope, nope, this won't fly.  You have a bootstrapping problem with
what "corresponding to reality" actually means, here -- you can't know
what "reality" is until you already have a confirmation theory, which
is what you're implicitly constructing.  Nor can you ever know that a
theory is correct over more than a "limited domain, or for a limited
time", because you don't have the capacity to observe the entire
universe over all time.  Your implied definition of "correctness" is,
therefore, not satisfiable.

Don't try to do metaphysics and confirmation theory at the same time.
This is one of the classic ways to fuck up, philosophically speaking.
If you want to keep it clean, stay humble -- do phenomenology first,
then go from that to confirmation theory, and only after *that* is it
appropriate to start throwing around big loaded metaphysical terms like
`reality'.
> For example, to consider a perhaps controversial issue, it's fairly
> easy to construct a successful predictive model which shows that the
> greater economic success of white males in U.S. society is caused by
> their genetically greater intelligence.  In fact, this explanation has
> been advanced seriously.  However, there have also been persuasive
> counter-arguments by a number of people.  The fact that this argument
> predicts successfully does not imply that it corresponds to reality.

Ah, but it does create what lawyers call a `rebuttable presumption'.

The greatest insight of the last two centuries in philosophy, and
arguably the greatest in its entire history, is that we have *nothing
else to go on* but predictive power; all other definitions of
`correct' collapse into circularity, bullshit, or "Because I say so!".
In a practical sense, this insight underlies all experimental science,
where it is basic that every `truth' assertion that does not 
translate into a testable predictive claim is just noise.

The Murray-Herrnstein hypothesis can only be rejected if we have other
theories of equal or greater predictive power to cover the same facts.
(The question of whether we in fact have such alternatives is beyond
the scope of this discussion; I'm making no claim either way here.)

There are some heuristics you can use to choose between hypotheses of
equal predictive power -- notably, Occam's Razor (choose the simpler)
and consilience (choose the partial theory which has the most in
common with other well-confirmed partial theories in adjacent domains).
But when a model predicts *better* than the alternatives, that gives
us the only warrant we can ever have for asserting that it is `true'.

> In the ``Homesteading the Noosphere'' paper, you don't offer any other
> explanations for the behaviour you are describing. 

Untrue.  Did you miss the discussion of the craftsman's urge?

>                                                   I don't find the
> explanation in that paper convincing, because it appears to argue that
> the behaviour results from one particular drive, when I believe that
> any reasonable description of human behaviour must account for many
> drives, and for a wide range of motivations.

Absolutely.  Primatologists say ``All interesting behaviors are
overdetermined'', that is, they have more than one sufficient explanation.
But conceding this, we can still ask which of the available sufficient
explanations gives us the most predictive leverage for the least
complexity.

We can then *use* that model in (as John Cowan says) an `as-if' way,
without necessarily seeking to banish the other explanations from
thought (and in fact HtN specifically recommends that we *not* banish
the `craftsman's urge' explanation from thought).  Which is my
attitude; perhaps others have oversimplified the matter.

> Hence, when you assign all aspects of the behaviour you are
> describing to a competition for status, I think you are
> oversimplifying.  No doubt you are correctly describing the motivation
> of some people.  However, I do not think you are correctly describing
> the motivation of most people.

We have now moved from a question of method and philosophy to a
question of fact, to be decided by experiment and investigation.

I don't claim to have definitive evidence -- but it is a fact that
lots and lots and *lots* of people in the culture recognize themselves
in the reputation-game model.  You seem to be in a fairly small minority
in actively rejecting it.

That's only significant to the extent you think most hackers
introspect accurately about their own behavior, of course.  Are you
willing to argue that most hackers get it wrong?  In principle you could
be right -- but I must beg leave to doubt it.

> Of course I play the reputation game. 

You play it well, too.  You're on my mental list of hackers worthy of
respect and cooperation, and have been for many years.  That is part
of the reason we are having this conversation.

>                                      That's a consciously chosen
> strategy to make people take me seriously, so I don't have to
> laboriously qualify myself every time I enter a new discussion about
> software.  On the net, nobody knows you're not a dog; it helps to have
> something to point to.  That's why I named my UUCP program after
> myself: it was an intentional marketing tactic (I did solicit other
> names, but the only one I liked--GNUUCP--was already being used by
> John Gilmore's program).
> Somewhat similarly, at my current company I plan to do what I can to
> get the product mentioned in the trade press, and to win awards.
> That's not because I care about the trade press, which I don't even
> read, or because I care about awards, which I think are PR nonsense.
> It's a means to an end, the end being increasing sales of the product.
> The reputation game is also a means to an end, the end being
> simplifying my communication across the essentially anonymous medium
> of the net.

You've just made the case for John's "as if" interpretation better than
I could possibly have done.

>    The fact that you don't consciously experience the reputation-game 
>    incentive is interesting, but not surprising to me.  I don't normally
>    experience it consciously myself.  Nevertheless, I play the game because
>    that's what I've *learned to do* in order to function in the culture.
> Yes, but my reading of your paper is that you are claiming that my
> primary motivation is the ``reputation-game.''  In the above paragraph
> you appear to be suggesting that you understand my motivations better
> than I do myself.

No; all I claim is that I can *model your behavior* better than you
could before you read my stuff and introspectively checked it against
your experience.  Whether I do that by correctly describing your
psychology or by referring to a sufficient "as-if" model of learned
behavior is not specified by my claim -- intentionally so.

Let's say that a theory of human behavior is psychologically or
"strongly" correct of people for whom it matches their introspective
experience.  Let us say it is "weakly" correct if it fails to match
their introspective experience but correctly predicts culturally
acquired behaviors.

For you, the "reputation game" theory of HtN is only weakly correct.

>                  I'm a fairly introspective person, and I think I
> have some understanding of my own motivations.  If you want to
> maintain this argument in full seriousness, I think your theory is
> getting perhaps a trifle close to being unfalsifiable.

OK, let me rephrase my position with a view to making it clearly
falsifiable.  I claim that the "reputation game" theory

(1) is weakly correct for almost all hackers

(2) is strongly correct for a solid majority of hackers

(3) accurately predicts the social norms of the hacker culture.

Note that (1) and (3) really are separable.  You can falsify my
theory by demonstrating that any of these three claims is untrue.
If you agree that this is possible in principle, we're done.  
If you don't, perhaps we can develop lower-level consequences 
that are testable.

>[Long discussion of an alternate explanation of the three taboos.]

This is very good stuff.  If you don't object, I'm going to fold this
into the next version of HtN.  You will be credited.

> Do I think that the explanations I've given above are the correct
> ones?  No, I don't.  I don't think there are any simple single correct
> explanations.  I don't think human behaviour works that way.  Human
> behaviour is complex, and requires complex explanations.  I think my
> arguments are part of a correct and complete explanation.
> That's why I think your paper is an over-simplification.  I don't
> think your explanations are completely wrong.  In fact, for some
> people they may be completely correct.  However, I do think that there
> are many more motivations than you are considering.

What you have shown is just that the model in HtN can be weakly
correct without being strongly correct at all.  I don't have a problem
with that.  Whether the model is "(2) strongly correct for a solid
majority of hackers" is an empirical question, not a philosophical or
ethical one.

I don't think we end up disagreeing very much, then.  Or, anyway, I
wouldn't be very upset by having HtN falsified *in that way*.  It
would disturb me much more if claims (1) and (3) turned out to be
false.

> I was perhaps unclear.  Here I'm responding to this quote from your
> paper:
> ``Anybody who has ever owned a dog who barked when strangers came near
> its owner's property has experienced the essential continuity between
> animal territoriality and human property.''
> I'm arguing that the existence of dogs which protect human property is
> not an argument for any sort of continuity between animal
> territoriality and human property.  That is like saying that the
> existence of guide dogs for the blind shows the essential continuity
> between the desire in dogs and humans to help others who are less
> capable than ourselves.

Careful, Ian, that argument will turn and bite you :-).  In fact, dogs
almost certainly *do* have such a desire that we *do* exploit, for
exactly the same reasons human beings have such a desire.  Kin
selection and behaviors translocated from nurturing the young!

> To my knowledge, this type of argument has never been made
> successfully for complex human behaviour.  Why not?  Because as humans
> learn about the argument, or invent it for themselves, they are able
> to reflect upon their own behaviour and adjust it according to their
> perceived advantage.  Thus human behaviour automatically introduces
> second-order effects, and indeed, as people reflect further, we find
> third-order effects, fourth-order effects, etc.  Human behaviour can
> thus be described as a chaotic system, in which the output conditions
> can not be reliably predicted from the input conditions.

Oh?  Show me a human culture in which parents routinely eat their young.

The remainder of the refutation is left as an exercise ;-).
		Eric S. Raymond

Under democracy one party always devotes its chief energies
to trying to prove that the other party is unfit to rule--and
both commonly succeed, and are right... The United States
has never developed an aristocracy really disinterested or an
intelligentsia really intelligent. Its history is simply a record
of vacillations between two gangs of frauds. 
	--- H. L. Mencken