It might be interesting to contrast this with how my own website has evolved. Discussion: https://www.gwern.net/Design
Here the idea is not so much 'links are all you need' as transclusion is all you need, in the form of dynamic popups/popins.
How can we have a ridiculous number of references? Make each one pop up its metadata, like an abstract. How can we have a ridiculous number of WP links for every term and concept? Pop them up; WP has an API. How can we cross-reference sections or arbitrary <div>/<span>s within a page? Make them pop up. How can we cross-reference other pages without the dreaded hypermedia effect of "oh my god, each page is 1 sentence long and I have to click through a thousand of them, screw you man"? Pop up with a mouse hover, effortless and fully recursive. How do we support backlinks, or indeed any arbitrary kind of metadata page about a link we may wish to create at some point? Just generate the HTML snippets and link to those snippets; popups do the rest. How do we implement tags? You'd better believe it involves simply creating an HTML page with all of the links carrying the 'tag' and then linking to it! How do we do topic-modeling with the cold-start problem of requiring heavy manual curation of all links (which is what OP suggests as a replacement)? Use topic-modeling to generate link suggestions while editing, and then link the useful ones (thereby making them visible when you edit and also generating backlinks automatically). When we can pop up anything, we can pop up raw data files like PDFs, or their syntax-highlighted versions (source code); we can even pop up pages on other domains (subject to a whitelist of having checked that X-FRAME headers don't kill it and it doesn't otherwise break horribly inside the iframe).
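The "tags are just a generated HTML page of links" idea described above can be sketched in a few lines. This is a minimal illustration, not the site's actual build code; the page names and output paths are invented:

```python
# Minimal sketch of "tags as generated pages of links": given each page's
# tag set, emit one static HTML page per tag that simply links to every
# page carrying that tag. All names here are illustrative.
from collections import defaultdict

pages = {
    "scaling-laws.html": {"ai", "statistics"},
    "lorem.html": {"design"},
    "popups.html": {"design", "js"},
}

def build_tag_pages(pages):
    by_tag = defaultdict(list)
    for page, tags in pages.items():
        for tag in tags:
            by_tag[tag].append(page)
    html = {}
    for tag, members in sorted(by_tag.items()):
        links = "\n".join(f'<li><a href="/{p}">{p}</a></li>' for p in sorted(members))
        html[f"tags/{tag}.html"] = f"<h1>{tag}</h1>\n<ul>\n{links}\n</ul>"
    return html

tag_pages = build_tag_pages(pages)
print(sorted(tag_pages))  # one generated page per tag
```

Because each tag page is just an ordinary page of links, popups and backlinks then apply to it for free.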
This is amazing and I wants it for my own site. I've been solving most of this by trying to Just Write Better and that works to an extent – it obviates the need for a lot of structure because you're building up the structure in the reader's brain as they go. Using stable references helps. Familiar readers can go "Ah he's referencing that concept again" and don't need to click the deeper link. New readers can click and find out.
But if they didn't need to click? That would rock!
Popups are modal and nested popups represent a path of navigation. Accidentally move the mouse too much and everything disappears - and if the stack was deep, good luck trying to get it back, because you lose the entire path. There's no back button or undo close tab to rescue you.
Adding explicit opening/closing would defeat much of the point of it being frictionless. It being so frictionless makes it easy to retrace your path in a split-second, which heavyweight browsing does not. (We do have explicit closing/opening for mobile popins because holding your finger in place would be ridiculous on mobile and break in many ways.) If you want a popup to stick around, you can simply 'pin' it using the pin icon, and also drag, resize, maximize etc.
I feel increasing fear doing any "frictionless" browsing, knowing that I'd need to retrace my steps if I have a muscle tic or something similar which causes the cursor to stray outside the allowable area. It's a Jenga tower of stacked navigation.
So it's not actually frictionless; it induces mental stress the deeper into the navigation path I go. And knowing that I'd feel stress makes me not want to look at the popups at all, for fear that I find something interesting and feel that stress within a nested navigation.
I think it's an interesting experiment but it's not for me.
The problem I have with your example link is that when you hover, there is no indication that something is loading before the window pops. Is there a case for popping the window with a loader, then replacing it once the data has loaded?
We actually do have a loading spinner. The problem is, it was designed for the download delay, and the download is now so fast that you don't even see it. The spinner was implemented back when popups were either just short HTML snippets or PDFs, so back then, the download time ~= total load time. But after we generalized the transclusion functionality to be able to pop up arbitrary sections or pages, across this and other websites, that is no longer the case. All of the delay is in parsing and rendering the fetched annotation/document. In the case of /Lorem (which I assume is the popup everyone is complaining about since it's in the abstract), it's a deliberately long torture-test document exercising every feature in many combinations; so, it just takes that long for your browser to render it.
Said is going to try to rewrite the spinner to cover the full rendering. I have also been thinking that for links to pages, like /Lorem (rather than links to a specific section of a page, like '/Lorem#images'), the total load time may be too bad an experience and it's time to go back to creating annotations out of the page abstracts, and then having the full page popup if necessary. (I was going to do something like this anyway in order to get my essays fully integrated into the tag/link-bibliography & similar-links systems, but I was going to keep the popups as live popups; now I probably won't.)
If you really care about details you should preload sidenotes and other links before I click on them. Most people (I) don’t care about bandwidth nearly as much as I care about time.
But I do! Links are preloaded after about a second or so of hover; the delay is because I don't want to prefetch a lot of links when readers are just scrubbing over links or briefly previewing them. I've thought that it's still too aggressive since that's not a lot of time to even skim an annotation, but it's hard to know what is a good timeout there.
(I'm not sure what you mean by 'sidenotes'; what we call sidenotes are just footnotes, which are of course at the bottom of the document and don't need to be 'loaded'.)
The Instant.page JS will still prefetch on mobile, it'll just do so on mouse-down instead of waiting ~1s of hover. Not as useful, presumably, but it would be hard to do better (how does one know before the finger touches the screen?) and still is shaving milliseconds.
There are all kinds of wild assertions dismissing semantics in describing links that the author seems to just hand wave away. I went back specifically to read the part about parent semantics and the conclusion is that this can be derived from links’ text, without acknowledging that that’s not how link text is typically used. Those kinds of relationships are usually encoded in additional semantic markup or data the links are derived from.
This is an interesting attempt to reduce a problem space, but ultimately I think it ignores a lot of factors, either ones in play where they see success or ones likely to cause it to falter should such reductionism be pursued. A case in point is another major use case cited: attempts to replace hierarchical file structures have not been successful, because those hierarchies are meaningful for the people who use them. This isn't for lack of trying! Even the accompanying screenshot shows the detritus of that effort, just noise in the sidebar.
I expected some sort of implementation at the end; something like, "and here's how you implement all of these ideas in one novel framework", maybe something to do with Gemini or something, but there wasn't anything. I get that the concept of linking is important, no one would deny that, but I thought they were talking about implementing all of these things in literal hypertext links.
Assume you have your filesystem as a set of HTTP links, and you have tagging, and whatever else. How do you tell the difference between a link to a file and a link to a tag? It seems like you'd need to impose an additional structure, or type system, on top of just links.
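One lightweight version of the "additional structure on top of links" this comment asks for is a URL-namespace convention, where the link's kind is encoded in its prefix. A sketch, with the `/tags/` and `/files/` prefixes as an illustrative convention rather than any standard:

```python
# A minimal "type system on top of links": encode the link's kind in its
# URL namespace, so a plain href string round-trips to a (kind, name) pair.
# The /tags/ and /files/ prefixes are invented for illustration.
def classify_link(href):
    if href.startswith("/tags/"):
        return ("tag", href[len("/tags/"):])
    if href.startswith("/files/"):
        return ("file", href[len("/files/"):])
    return ("page", href.lstrip("/"))

print(classify_link("/tags/design"))  # ('tag', 'design')
print(classify_link("/files/a.pdf"))  # ('file', 'a.pdf')
```

The point of the objection stands, though: the typing lives in a convention outside the link itself.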
Exactly. Whereupon you build a semantic system which has supposedly failed, but probably more haphazardly, because your only primitive is links. Which, not to beat the horse I already declared dead, but… is an effort that has been pursued with great resources, and with lots of evidence it doesn't work.
> I expected some sort of implementation at the end
I wanted to see that because as I read on I honestly had more problems understanding the concept and wanted to see the idea implemented so that I might understand. But nope :(
I have never wished so much to put a gif in a comment on HN, but I’ll do better and transcribe it. “I already am eating from the trash can all the time. The name of this trash can is ideology.”
> Let’s think about a link. Many pages can point to a single page. Many-to-one. One-to-many. So, we could achieve tagging with links by listing all backlinks to a given page. Tags are just backlinks to pages that don’t exist.
Does this make sense to anyone? This reads to me like GPT-3 wrote it, a simulacrum of a deep thought...
Yes, although it could be stated more clearly. The “pages that don’t exist” are conceptually a virtual page for each tag that links to every page that includes said tag. This is pretty obvious to anyone with some experience with contemporary note taking apps since they generate backlink pages dynamically.
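The "pages that don't exist" reading can be made concrete: invert the link graph, and any link target with no page of its own behaves as a virtual tag page whose content is just its backlinks. A sketch with invented note names:

```python
# Sketch of "tags are backlinks to pages that don't exist": invert the
# link graph; any link target that has no page of its own acts as a
# virtual tag page listing its backlinks. Note names are illustrative.
from collections import defaultdict

outlinks = {
    "pancakes.md": ["Breakfast", "Quick"],
    "omelette.md": ["Breakfast", "pancakes.md"],
}

def backlinks(outlinks):
    back = defaultdict(set)
    for src, targets in outlinks.items():
        for t in targets:
            back[t].add(src)
    return back

def virtual_tags(outlinks):
    # Targets that are real pages (like pancakes.md) are ordinary backlinks;
    # targets with no page of their own are the "tags".
    return {t: sorted(srcs)
            for t, srcs in backlinks(outlinks).items()
            if t not in outlinks}

print(virtual_tags(outlinks))
```

This is exactly the dynamic backlink-page generation that contemporary note-taking apps do.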
You can't have a backlink without a link, and you can't have a link without a page, so you can't have a backlink to a "page that doesn't exist": without a page, there is no link.
The right way to say this is "a tag is a special kind of page", which is far more boring (didn't del.icio.us do this years ago?), but way more understandable.
In fact most of this article could be summed up as: most interfaces can be represented as a page of links.
I was thinking along similar lines. I kept thinking this isn't about links; "links" in this article is being used as a synonym for "relationships".
There is no revelation in seeing that everything in the entirety of existence, physically or conceptually, is built out of related stuff. Nothing exists outside of everything else (except a parallel universe, maybe); everything is derived from something.
> There is no revelation in seeing that everything [...] is built out of related stuff.
This is a major revelation, as it's not people's naive intuition. It's the basis of modern physics and category theory, the foundations of mathematical theories of the real and abstract worlds.
Well, I've always looked at the world as interconnected: ideas never come from the ether; they are always inspired by (linked to) something.
Either way, why use "links" as the analogy? Why not just call them relationships? That's what they are. We know what relationships are just as well as we know what links are, so there's no need to pick a familiar concept and extrapolate.
You may personally feel that way, but I would argue (and now understand that I should have this whole time) that many people do not think this way.
For possibly the majority of the planet, relationships between things do not jump out like an interconnected web and therefore that’s not their “fundamental truth of how the world is made up”. If that’s surprising to anyone else who reads this: welcome to the club.
The general understanding of distinct things is not a fundamental truth, it's a language tool to describe what we experience.
Things themselves do not exist outside of conceptual thinking. Unless we can get down to the true atomic source of everything, then maybe you've found the distinct "thing", but you cannot experience that, only describe it.
For example, my "body" is made up of the ultimate fundamental atomic unit just like everything around it. There is no line of delineation, it's just a sea of atomic units. My body only exists as a distinct "thing" because that's how we interpret this sea of units with our limited senses.
I see it a little like pixels on a display. The array of pixels all display light, they are all the same, but as we zoom out we begin to interpret the many pixels forming distinct objects on the screen. This does not change the fundamental reality that the pixels are not that thing, and that thing does not exist outside of our interpretation.
And all of this is assuming reality is fundamentally atomic, which has yet to be proven.
I agree with you after a clarification (I used “thing“ to mean any object, component, concept, meaning, perspective, etc. Seeing your example through that lens: a pixel is a thing, light is a thing, the objects and the images representing them are separate things, your perception is a thing, your awareness of your perception is a thing, and so-on):
In fact, it’s my opinion that it’s possible there is no Absolute Truth at all (if there’s any I suspect it might be math, but that’s way above my pay grade as it were) and that everything is simultaneously one, nebulous, and subjective.
However, for convenience we need to have these discussions about interconnectedness somewhere closer to the “We are humans interacting with a solid world” paradigm.
I mean, yes. If you stretch the meaning of link to mean “any many-to-many relationship”, then yes, just about every relationship can be represented as a link.
I feel this was not well received on HN because of a lack of context. The author isn't trying to "solve the web" or propose a better solution for general file systems. He's describing a solution for organizing his note-taking system, nothing more, nothing less. Most of the criticism here doesn't apply to that use-case.
I thought the article made a lot of sense in that context, having also struggled in the past with organizing notes: nested folders vs. flat structure, longer 'document' notes vs. short blocks, tagging, inter-linking, discoverability...
For me, it is easier to think about this approach as "you can model it as a graph" and "you can solve this with graph algorithms". Too often, the argument seems to be:
* Look at problem space
* Recognize it has a graph structure
* Graph structures can be modeled with links
The last step seems unnecessary, or at least it can be generalized/implied after recognizing the graph structure.
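The "model it as a graph and use graph algorithms" framing can be sketched directly: notes are nodes, links are edges, and a "folder" is just the set of notes reachable from an index node via BFS. The note names are invented:

```python
# Notes as nodes, links as directed edges; a "folder" is the set of notes
# reachable from an index node. Plain stdlib BFS; names are illustrative.
from collections import deque

links = {
    "index": ["projects", "recipes"],
    "projects": ["site-redesign"],
    "recipes": ["pancakes"],
    "site-redesign": [],
    "pancakes": [],
}

def reachable(links, start):
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in links.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(reachable(links, "recipes")))  # the "recipes folder"
```

Once the graph structure is recognized, "links" vs. "folders" vs. "tags" are just different queries over the same edges, which is the generalization the parent comment is pointing at.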
The really difficult part seems to be: Combining all of the ideas. Distinguishing between the types of links seems to increase the complexity again, but using links as a general model gets more interesting (to me) if it actually ends up in a unified space.
The article's proposed mapping of tags and especially comments to links strikes me as unavoidably clunky in a world where one-way links are the only kind available.
Maybe I'd find this golden hammer argument more convincing in the context of a hypertext system where bi-directional links [1] were common and well supported.
My thought is that the author has discovered that a relationship can be represented as a link. That they therefore “are all you need” is logic I don’t grasp?
This resembles a brain which learns and organizes via association recognizing that it does so, and expressing this recognition via its own dominant association for “associations”. Comments show variations of other brains which do the same thing and recognize it expressing so via their own dominant associations for it. And so on. The challenge is as always how to manage the unmanageable, how to navigate the unnavigable, because there are too many links in total without some means of focusing/searching, or too few links due to excessive filtering, etc, etc, etc. Trying to find the right metaphor for this has wasted enormous amounts of resources, IMO, because that isn’t “the real problem”, but that’s exactly what every one of us is doing right now, so why not.
Links matter a lot, but alone they do not suffice. While it's possible to traverse a graph looking for pretty much anything, doing so one link at a time is definitively not practical. We arrived at search engines for a reason.
We need to be able to query our information base, and queries sometimes need to offer more than mere full-text search. For that, many kinds of meta-information are needed, from tags to topic modeling. They are hard, since automating them is hard, but they give back results that can't be reached equally well any other way, so...
The quality of web pages is similar to economic models: the moment you find a model that predicts web-page quality, that model starts to fail. A small percentage of webmasters will be monetarily incentivized enough to make most of the web point to them; the more pages point to their websites, the more pages they can make point to their websites using the profits from abusing the algorithm, until the whole model reaches an equilibrium where links do not matter at all (they give a bad experience, and other methods are better by then).
I built (back in the day) a forum search engine that used link counts to sort results, and it was surprising how useful that was: it made for a usable search engine, not the best but usable. It felt a bit like Google and less like forums' built-in search.
What's unique about PageRank is that the next improvement giving comparable benefit would be far more complex. So yeah, its simplicity is its main merit.
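The simplicity being praised is easy to show: a workable PageRank is a short power iteration over the link graph, only one step up from the raw in-link count the parent comment's forum search engine used. A sketch on a made-up graph, with the conventional damping factor of 0.85:

```python
# Simplified PageRank power iteration over a tiny, invented link graph.
# d=0.85 is the conventional damping factor; ranks stay a probability
# distribution (they sum to 1) across iterations.
def pagerank(links, d=0.85, iters=50):
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - d) / len(nodes) for n in nodes}
        for src, targets in links.items():
            if targets:
                share = d * rank[src] / len(targets)
                for t in targets:
                    new[t] += share
            else:  # dangling node: spread its rank evenly over all nodes
                for n in nodes:
                    new[n] += d * rank[src] / len(nodes)
        rank = new
    return rank

links = {"a": ["b"], "b": ["c"], "c": ["b"], "d": ["b"]}
ranks = pagerank(links)
print(max(ranks, key=ranks.get))  # "b": most linked, and reinforced by "c"
```

The whole thing is ~20 lines; a ranking improvement of similar magnitude (learned relevance models, query-dependent scoring) costs vastly more machinery.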
Assuming the article is talking about web links, not some new technology, why didn't it mention the rel attribute, which seems essential for modelling everything talked about, as link text is too arbitrary? Also, the wording seems to imply no need for semantic differentiation, but calling something a tag, comment, or link implies a lot about the intention, so I assume the article meant something closer to "you can implement everything mentioned with HTML links". There were some projects in the linked-data/semantic-web space that came very close to this. I would love to see a revisit of this idea with JSON instead of XML/HTML and IPFS instead of HTTP.
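Typing links via rel rather than link text is straightforward to consume; `rel="tag"` is a real microformat value, though the sample HTML below is invented. A stdlib-only sketch:

```python
# Sketch of reading link semantics from the rel attribute instead of link
# text, using only the stdlib parser. rel="tag" is a real microformat
# value; the sample HTML is invented for illustration.
from html.parser import HTMLParser

class RelLinkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []  # (rel, href) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            d = dict(attrs)
            self.links.append((d.get("rel", ""), d.get("href", "")))

html = '<a rel="tag" href="/design">design</a> <a href="/Lorem">Lorem</a>'
p = RelLinkParser()
p.feed(html)
print(p.links)
```

The tag/comment/link distinction the comment asks for then lives in machine-readable markup rather than in arbitrary anchor text.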
An example of where this paradigm shift works quite well is the note-management tool Obsidian.md.
You can create notes within other notes with backlinks, you can tag notes, and importantly, you can have dynamic blocks that access note metadata like the outlinks and inlinks... Once I got comfortable with the dynamic blocks, I realised that I didn't need to tag my notes; I could just create links to topics, and I found that approach to have some nice benefits. For instance, I started creating recipes and tagging them "Breakfast" and "Quick", and I could then create a "Breakfast" note that displayed all the notes linked to it, with more control over what was shown, and write information around it.
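The workflow described above, where a "Breakfast" note displays everything linking to it, boils down to scanning notes for wiki-links. A sketch with invented note contents; this illustrates the idea, not how Obsidian actually implements it:

```python
# Sketch of "links to topics instead of tags": scan notes for
# [[wiki-links]] and build the index a "Breakfast" note would display.
# An illustration only, not Obsidian's actual implementation.
import re

notes = {
    "Pancakes": "Fluffy. [[Breakfast]] [[Quick]]",
    "Omelette": "Eggs. [[Breakfast]]",
    "Stew": "Slow-cooked. [[Dinner]]",
}

WIKILINK = re.compile(r"\[\[([^\]]+)\]\]")

def linked_to(notes, target):
    return sorted(name for name, body in notes.items()
                  if target in WIKILINK.findall(body))

print(linked_to(notes, "Breakfast"))  # ['Omelette', 'Pancakes']
```

Because the "tag" is just a link target, the Breakfast note can exist as a real page with prose wrapped around its dynamically generated index.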
I don't think I really agree about tags just being links to a nonexistent endpoint. I feel like there is some value in the curation of tags. And tags as a navigation mechanism are useful on their own.
> So, a folder could be expressed in terms of a page full of links.
This is basically what I do in Obsidian with the Zootelkeeper and Folder Note plugins. Every folder has a note attached to it with an autogenerated index of all the notes contained within. So for all my notes I use a hybrid of folders, indexes, and the graph view to navigate through everything. So far I'm not really needing to go more than 3 folders deep; mostly it's just 2 folders deep.