Subject: Re: Microsoft may publish some source code
From: "Stephen J. Turnbull" <>
Date: Tue, 18 May 1999 13:54:04 +0900 (JST)

>>>>> "rn" == Russell Nelson <> writes:

    rn> Has anybody else seen the reporting on the Mindcraft
    rn> benchmarks, wherein untuned Linux performance turned out worse
    rn> than tuned NT on a whopping big server (four Xeon processors
    rn> and 4GB memory).

Yes.  I've seen worse work than Mindcraft's published in top economics
journals though :-P

    rn> Does anybody use such a large server in real life?

I believe that is a top-of-the-line Alpha with
2GB memory (running DEC/OSF or something like that), but httpd isn't
the only thing it runs.  I wouldn't be surprised to see a lot of
servers with that configuration here in Japan; we're woefully short of
distributed-computing-capable admins.  And it's easier to get HQ
(whether that's corporate or a government department) to pay for
hardware that requires a forklift to upgrade; we're woefully short of
MIS-capable bureaucrats (let alone ones who understand academia).

Not that that in itself guarantees this is a bad way to run a Web site.

    rn> In my experience servers are more likely to be a farm of
    rn> smaller machines serving files off a NFS server.  The service
    rn> scales better; you can upgrade without a forklift.

Playing the devil's advocate, theoretically it could be that you don't
see servers done that way because tuned Linux (or Unix in general) is
better than tuned NT on "normal size" servers, and Linux/Unix doesn't
scale well to big servers, so ....

I know the answer to that as well as you do; my point is just that we
have to be more careful than the guys on the other side of the
river trying to blow up the bridge ;-)

University of Tsukuba                Tennodai 1-1-1 Tsukuba 305-8573 JAPAN
Institute of Policy and Planning Sciences       Tel/fax: +81 (298) 53-5091
What are those two straight lines for?  "Free software rules."