This is tragically accurate. Using http://www.jcmit.com/memoryprice.htm I see that RAM was about $30/MB in 1996, meaning that the Firefox process I'm typing this in, which is using 400 MB of RAM, would have cost about $12,000 in memory alone (400 MB × $30/MB).
(I remember upgrading my PC to 4 MB so I could play DOOM round about then.)
By 1998, there were way more than five back end servers. I don't remember exactly how many 8400s we had -- 20? 40? Something in that range. They'd gone up to 12 GB of RAM apiece, which came on boards the size of a fairly large baking sheet. The servers were as big as a refrigerator.
The primary datacenter was a floor above PAIX in the middle of downtown Palo Alto. Pricey server space.
Oft-forgotten bit of history: Elon Musk's first big success was selling Zip2 to AltaVista (under Compaq).
The Alphas of that age were great servers. It's not really surprising that they used an AlphaServer for the indexer, and just one of them. Clustering was still pretty exotic technology at the time (although IIRC VMS already had it). At that time (which was when I started using AltaVista, because it was "fast") our machine room had a few AlphaServers in it, and the managers and I were always arguing over whether Intel boxes would replace Alphas (and HP, and IBM, and SGI) as high-performance computing servers.
When Google came along I proudly added their web search form to my home page with the note "They use linux", because I felt it validated my belief that Linux would become the server platform of choice.
AltaVista was originally a DEC project, which was the biggest reason for using Alphas. It's a search engine and a marketing tool! But I agree, the speed was great. There was a spare TurboLaser 8400 which was occasionally used for SETI@Home and usually lived in the top five individual servers on the leaderboard when it was looking for aliens.
Yep, back in those days I loved DEC and AltaVista and Alphas, although they were out of my budget, so I purchased Linux machines. In the lab next to me was a bunch of old DEC-heads who upgraded their AlphaServers to Tru64 and then TruCluster. I have a great deal of respect for them, especially late-stage Alphas such as clusters of GS1280s running TruCluster. They were engineered for throughput and reliability, and I know many important workloads ran on them.
I don't think that DEC ever really managed to market AltaVista and Alpha to the extent it could have in the rapidly growing period of the Internet.
In 1996 they had some sort of program to provide discounted hardware to web startups. I worked for a company that took advantage of it, and while I was there we started adopting their great high-availability system.
This revealed perhaps their biggest problem, a legacy, you might say, of how they configured and sold hardware: the very process of buying from them was difficult and required acquiring all sorts of domain knowledge.
Some years earlier, when they were all proud of their expert system that would correctly configure a system, I compared it to buying a Sun workstation, where the most difficult decisions were choosing your preferred keyboard (e.g. old-school Sun/UNIX vs. PC layout, and language) and the right power cord for your country.
I think they largely failed to capitalize on the dot-com boom, especially as Compaq fumbled just about everything they were doing when they bought DEC in 1998, and then the dot-com bubble went bust....
I really hope this whole linking-to-an-image thing is just a fad. Fascinating information; it's super annoying not to be able to copy bits out of it or zoom in without blurring.
AltaVista was my favourite search engine when I was a teenager... I remember how back in the day I refused to change to that newcomer named Google... Amazed to learn it was running on only 5 servers with those specs! It says a lot about how few people were connected to the net in the early days of the commercial web. I'd love to know how many concurrent connections they had to deal with...
I believe it was written in C, mostly by Mike Burrows (also known for the Burrows-Wheeler transform). He went on to work at Google, where he has had an extremely productive career building some of the most fundamental technologies there (Chubby) as well as their high-performance primitives (https://www.chromium.org/developers/lock-and-condition-varia...).
He is one of the few people I've worked with who is a veritable genius programmer. He understands atomic locking at the physics level.
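For anyone curious, the Burrows-Wheeler transform he's known for is surprisingly simple to sketch: sort all rotations of the input and keep the last column, which groups similar characters together and makes the text far more compressible. A naive Python toy (real implementations use suffix arrays instead of materializing every rotation):

    # Naive Burrows-Wheeler transform, O(n^2 log n) -- illustration only.
    def bwt(s, sentinel="$"):
        s += sentinel  # unique end marker so the transform is invertible
        rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
        return "".join(rotation[-1] for rotation in rotations)

    print(bwt("banana"))  # -> "annb$aa"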
Perhaps a lesson to learn here is to consider who won. While AltaVista focused on physical details (number of servers, resource usage, etc.), they missed the boat on the bigger picture: how to significantly improve search engine results. Brin & Page did; they filed a couple of patents, secured investment, registered google.com, scaled, etc., and won.
Perhaps this is another example of "what you measure is what you become". I realize it's just one email and we're missing all sorts of context, but to me this email seems to boast about their technical prowess in pulling off the scaling aspects. Perhaps it indicates that measuring physical resources was a higher priority than DRASTICALLY improving search engine results.
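For what it's worth, those patents were essentially PageRank: a page ranks highly if well-ranked pages link to it. A minimal power-iteration sketch in Python (toy graph and damping factor, nothing from the real engine):

    # Toy PageRank by power iteration; links maps page -> list of outlinks.
    def pagerank(links, damping=0.85, iterations=50):
        pages = list(links)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / n for p in pages}
            for page, outlinks in links.items():
                targets = outlinks or pages  # dangling pages spread rank evenly
                share = damping * rank[page] / len(targets)
                for target in targets:
                    new_rank[target] += share
            rank = new_rank
        return rank

    # "c" is linked to by three pages, so it ends up ranked highest.
    print(pagerank({"a": ["c"], "b": ["c"], "c": ["a"], "d": ["c"]}))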
Maybe it's a bit more nuanced than that. Google gave better search results for mere mortals, but there was a period during which I kept using AltaVista because its old-school search inputs allowed me to better find what I wanted (and I miss a lot of that nowadays, especially proximity searching).
But it was definitely branded to show off DEC's hardware.
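For younger readers, those old-school inputs meant AltaVista's advanced search took full boolean queries plus a proximity operator, roughly along these lines (operator details from memory, so treat this as a sketch):

    apple AND NOT fruit               pages about the company, not the food
    "digital equipment" NEAR alpha    terms within about ten words of each other
    +linux -windows title:howto       require/exclude terms and field filters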