Hacker News

I remember using Mosaic for the first time and thinking that it sucked ass in comparison to gopher - so much less information available, and it was very hard to just browse the hierarchy to see what was on a server.

On the other hand, I kind of miss Mosaic's ability to easily turn off image loading. There are more than a few sites that'd be improved by that feature.



uBlock Origin in advanced mode allows easy blocking of images, both globally and per-site. In Firefox, however, this only works for the normal view; Reader View and the Page Info dialog ignore it.
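For anyone who hasn't tried it: once advanced mode is enabled in uBlock Origin's settings, the rules below (entered under "My rules"; `example.com` is a placeholder) should block images globally and for a single site, respectively:

```
* * image block
example.com * image block
```

The first column is the site you're visiting, the second the request's destination, then the resource type and the action.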


The HTTP web really took off with good search engines. Before that, finding things relied on much more curated methods. Even FTP!


Everyone forgets the "directory" era. There was a time when Yahoo was the primary way to find things on the web. Getting your new website listed there was like winning the SEO wars.


I don't remember the timeframes exactly but at one point, I had a local "home page" on my Unix workstation that was basically a graphic and the links I was most interested in. Yahoo was probably there. Search engines were coming in but I mostly bounced around until Google came along. AltaVista was early on.


Back in the day, there were hosts on the internet that let you browse their entire filesystem via ftp. This was in the days before shadow password files were a thing, too. I'm too upstanding of a citizen to have done so, but a friend of a friend once spent a couple weeks of computer time running crack on those things and managed to gain shell access to some of the machines.


If you had such an FTP service, there was a good chance that eventually you'd end up unwittingly serving porn out of a twisted little maze of nested directories with embedded special characters and the like.

This killed a fair few useful-but-not-important sites.


At one point a well-known FTP server would let you access it with Samba, complete with R/W access to certain directories. I had an Amiga with a small hard drive, a modem, and the "VMM" virtual memory program. Experiments led to me creating a 2GB sparse file on the FTP server, mounting that server as a volume, and pointing VMM at that sparse file. Voila! An Amiga with 2GB of RAM, so long as you didn't mind swapping at about 5KB per second.

That was completely useless, but it was great fun to get working at all. I hope my sparse file was actually sparse on the server, too.
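The trick above only works if the 2GB file really is sparse: the filesystem records the logical size but allocates disk blocks lazily, only when data is actually written. A minimal sketch in Python of how such a file is created (the filename is hypothetical; whether the hole stays sparse depends on the filesystem):

```python
import os

SIZE = 2 * 1024**3  # 2 GB, like the swap file in the story

# Seek past the end of an empty file and write a single byte.
# Everything before that byte becomes a "hole" that occupies no
# disk blocks on filesystems with sparse-file support.
with open("swapfile.img", "wb") as f:
    f.seek(SIZE - 1)   # jump to the last byte without writing anything
    f.write(b"\0")     # one byte fixes the file's logical size

# Logical size is the full 2 GB...
print(os.path.getsize("swapfile.img"))  # 2147483648
# ...but the actual allocation (in 512-byte blocks) stays tiny
# on a sparse-aware filesystem.
print(os.stat("swapfile.img").st_blocks)
```

Pages the swapper never touched would then cost nothing on the server, which is presumably what the poster was hoping for.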


Tangentially, it turns out one could use late-70s technology to store 5GB of data on a VHS tape: https://www.youtube.com/watch?v=xSnrQBfBCzY


And of course there was an Amiga package for it: http://www.hugolyppens.com/VBS.html


> An Amiga with 2GB of RAM

Huh, what did the Amiga's memory model look like? Could you construct a pointer to 2 GB different locations? Did it have segments like the 8086 or something?


It had a flat 32-bit address space, so 2GB of directly addressable locations was no problem.

Edit: You might've been asking a different question. Toward the end, lots of Amigas had MMUs, either as a separate chip or built into the CPU. VMM and similar programs used the MMU to implement paging.


Those are both interesting answers, and I didn't really know anything about the Amiga's architecture (other than to have imagined wrongly that it might have had 16-bit addresses). Thanks.


Not just porn, but also warez (pirated software). In fact, I'd say warez was much more common than porn, though that might be because of observer bias...


Archie (the FTP search engine) and Jughead and Veronica (Gopher search engines) predated even the WWWW (World Wide Web Worm) search engine for the WWW.



