
An example of such a thing is the Libgen search interface hosted on IPFS (both the data and the webapp). If you have a means to directly navigate IPFS (that is, without using a proxy), it can be found here:

ipns://libgen.crypto/

If you do not yet have this set up, the same thing can be reached through a proxy, e.g.:

https://libgen-crypto.ipns.dweb.link/

The former (pure IPFS/IPNS) link is resistant to censorship as long as access to IPFS is available. The latter can of course be censored but once IPFS becomes mainstream the need for such proxies will disappear.

More on this project can be found here:

https://libgen.fun/dweb.html



> If you have a means to directly navigate IPFS (that is, without using a proxy)

What's the requirement for this?

Like, is this a me (local config) or them (ISP connection) issue?


You need access to the internet; that's about it. IPFS can use any transport protocol (see section 3.2 in the whitepaper [1]). It uses a distributed hash table for routing and content addressing to represent objects. These objects are immutable: once published, they remain available as long as there is at least one peer which has the object in cache or 'pinned' (permanently cached).
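The content-addressing idea can be sketched in a few lines of Python. This is a toy illustration, not the real CID algorithm (IPFS uses multihash/CID encodings); the `Qm-toy-` prefix and helper names are made up:

```python
import hashlib

def content_address(data: bytes) -> str:
    """Toy content address: the object's identity is a hash of its bytes."""
    return "Qm-toy-" + hashlib.sha256(data).hexdigest()[:16]

store = {}  # address -> immutable object, as any peer's cache would hold it

def publish(data: bytes) -> str:
    addr = content_address(data)
    store[addr] = data  # re-publishing identical bytes is a no-op
    return addr

addr = publish(b"hello dweb")
assert store[addr] == b"hello dweb"    # fetched by address, not by location
assert publish(b"hello dweb") == addr  # same content -> same address
assert publish(b"hello dweb!") != addr # any change -> a new address
```

The last assertion is why plain `/ipfs/` links break on updates, which is the problem IPNS (discussed further down) exists to solve.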

Read the whitepaper and install [2] a node of your own to get a feel for the thing; you'll soon find it is an amalgamation of earlier peer-to-peer systems. The go-ipfs daemon tends to be quite busy: it averages around 30% CPU, 500 MB memory, 0.1 Mb/s in and 0.04 Mb/s out when hosting ~3 GB of (self-generated, niche-interest, database-related) files. This busyness is acknowledged by the developers and should be addressed somewhere down the line.

[1] https://github.com/ipfs/papers/raw/master/ipfs-cap2pfs/ipfs-...

[2] https://dist.ipfs.io/ (get go-ipfs)


> IPFS can use any transport protocol (see section 3.2 in the whitepaper [1]),

In theory. In practice, the network (I checked my local node, with ~2500 peers connected to it) mostly uses TCP and QUIC (which runs over UDP), in a more or less 50%/50% split between the two.
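You can eyeball that split yourself from your node's peer list. A hedged sketch: it assumes you feed it multiaddrs as printed by `ipfs swarm peers`; the addresses below are made-up examples, and the classification is simplistic (QUIC multiaddrs contain `/quic`, otherwise look at `/tcp/` vs `/udp/`):

```python
from collections import Counter

def transport_of(multiaddr: str) -> str:
    # Classify a multiaddr by transport; QUIC itself runs over UDP.
    if "/quic" in multiaddr:
        return "quic"
    if "/tcp/" in multiaddr:
        return "tcp"
    return "udp"

# Hypothetical sample; real input would be the lines from `ipfs swarm peers`.
peers = [
    "/ip4/203.0.113.7/tcp/4001/p2p/QmPeerA",
    "/ip4/203.0.113.8/udp/4001/quic/p2p/QmPeerB",
    "/ip4/203.0.113.9/udp/4001/quic/p2p/QmPeerC",
    "/ip4/203.0.113.10/tcp/4001/p2p/QmPeerD",
]
print(Counter(transport_of(p) for p in peers))  # tally per transport
```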

> This busyness is acknowledged by the developers and should be addressed somewhere down the line.

IPFS has been killing routers [https://github.com/ipfs/go-ipfs/issues/3320] and sending/receiving lots of network traffic [https://github.com/ipfs/go-ipfs/issues/2917] since 2016, and there haven't been any notable improvements on that front yet. When is "down the line" in reality?


The "router kill" problem is a problem with those routers, not with IPFS, or any other chatty program for that matter. That said, IPFS is chatty and as such not something you'd run on a dial-up line. ADSL 4/1 or higher would be fine though, as long as you get a router which can take the load - just find something that can run OpenWRT and has enough memory and you're set.

Of course you don't need to run IPFS to get at IPFS-hosted content, there are plenty of gateways out there (one of them hosted by Cloudflare). Run your own node if you want to have full control over the path between IPFS and your instances, if you want to contribute to the decentralisation of the 'net or if you just like to tinker.


> The "router kill" problem is a problem with those routers, not with IPFS, or any other chatty program for that matter.

It's really hard to be convinced by that argument when go-ipfs is the only software that manages to kill people's routers until they reboot them, while literally every other piece of software they use works perfectly, even BitTorrent and other data-heavy protocols.


The same problem has occurred with many P2P protocols; just search for 'p2p router crash'. It occurs with BitTorrent, DC++, eDonkey and, yes, IPFS, as well as many other applications which open a lot of connections at the same time. This causes the undersized NAT connection-tracking tables to overflow, after which the device can no longer create new connections.

I'm rather surprised that you think only go-ipfs causes these problems, given that this is a well-known issue with lower-spec or misconfigured consumer routers, cable modems and similar devices. Sometimes it can be solved by increasing the size of the tables (which are often set to some ridiculously low number like 1024 or 2048 entries) if the device has enough memory. If this is not feasible, just get a better device, put OpenWRT or a similar free software distribution on it, configure it for 16K connections and it should work.
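The overflow mechanism described above can be shown with a toy model. The table size and five-tuples here are hypothetical, and real conntrack entries time out rather than live forever, but the arithmetic is the point: a chatty P2P daemon with more peers than the table has slots locks everyone else out.

```python
class ConntrackTable:
    """Toy NAT connection tracker with a fixed number of slots."""
    def __init__(self, max_entries: int):
        self.max_entries = max_entries
        self.entries = set()

    def open_connection(self, five_tuple) -> bool:
        if five_tuple in self.entries:
            return True  # existing flow, nothing new to track
        if len(self.entries) >= self.max_entries:
            return False  # table full: new connections silently fail
        self.entries.add(five_tuple)
        return True

router = ConntrackTable(max_entries=1024)  # a ridiculously low default
# A chatty P2P daemon opening one connection per peer:
ok = [router.open_connection(("10.0.0.2", 4001, f"peer{i}", 4001, "tcp"))
      for i in range(2500)]
print(ok.count(True), "tracked,", ok.count(False), "dropped")
# → 1024 tracked, 1476 dropped
```

With 2500 peers against 1024 slots, everything after the first 1024 connections fails, which from the user's couch looks exactly like "the router died until I rebooted it".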


Why don't you look through the first issue I linked to earlier (3320; also read the linked issues for even more examples)? There are plenty of examples of people having trouble with only go-ipfs while other software reportedly works fine.

Getting a new router is recommended in that issue too, but that's a band-aid; it doesn't actually solve the problem. BitTorrent and its various clients have been able to solve this, and since the issue is still open, Protocol Labs, who work on go-ipfs, seem to think they can solve it too. Are you sitting on some information Protocol Labs doesn't have, thinking this issue can never be solved? In that case I suggest you share your ideas in that issue, so people can understand that go-ipfs is not for normal consumers with standard routers, and that they need to get a proper router before trying this specific software.


I don't go through such issues because it works for me™ and I know why it does not work for those routers. It is not as if I have a super-special router either, just a container running OpenWRT; earlier I used a Netgear WNDR3700 running OpenWRT [1], and both worked fine. The problem can often be solved without buying new hardware, provided the existing device supports one of the alternative distributions (OpenWRT is my personal choice, but there are others). Since it, as stated, works for me™ without my needing to do anything at all, I can only assume the problem is the limited table space I mentioned earlier, something which still plagues not just IPFS but other P2P applications as well; search for 'Bittorrent killing internet connection' for some examples. The workaround is to reduce the global connection count to a number the router can handle, and the same can be implemented in IPFS. But the real solution is to make sure the router can handle a large enough number of connections.

Yes, that is the real solution and not a band-aid, at least if you want the net to become more decentralised, like I do; that is why I run IPFS and a host of other services. ISP-provided hardware often does not handle this load, both because providers contract out to the lowest bidder for these devices and to disincentivise people from running services on their connection. If you're forced to use provider equipment, take only a simple switch, transceiver (in case of a fibre connection) or modem; don't go for the shiny all-in-one box which promises a one-stop internet solution, as that thing is a) controlled by your provider and as such b) in service of your provider first and foremost, enabling them to e.g. use your connection for wifi sharing outside of your control. It is also likely to be hampered by the problems mentioned above. Get your own router and your own wifi access points (which can be combined with the router but don't need to be; re-purposed cheap routers running OpenWRT make good access points), all totally under your own control. Make sure the devices can run something like OpenWRT so you're not stuck with vendor firmware, install it on all of them, configure them to your liking and you're done.

Source: this is what I've been doing for about 30 years now, from back in the days before wifi was a thing, going from 10base2 ("thinnet") to 100baseT to gigabit; from no wifi through an Engenius/Senao 200mW 802.11b card [2] in the back of the server tower as AP (it covered the whole farm easily), through a WRT54GL running DD-WRT (killed by lightning), two Asus RT-N16 running DD-WRT (both killed by lightning), some cheap Sitecom thing running OpenWRT (killed by, you guessed it, lightning), through the mentioned Netgear WNDR3700 running OpenWRT, to now a virtual OpenWRT router on the server-under-the-stairs; from 1.5/0.128 cable through on-demand dialup to ADSL 2/0.25 (2 modems killed by lightning), ADSL 8/1 "best effort" (4 modems killed by lightning) to gigabit fibre. I use a number of "Xiaomi Mi Router 4A Gigabit Edition" (yes, that is the name) running OpenWRT as access points; they were simply the cheapest (€29) option I could find which a) could run OpenWRT and b) had at least 2x2 MIMO. I would not use these things without replacing the firmware, since I do not see the need to let Xi and friends into my network, but given that I was planning on doing so anyway this did not bother me.

So, get some reasonable hard/firmware and things should just work. They work for me™ after all...

[1] https://openwrt.org/toh/netgear/wndr3700

[2] https://www.solwise.co.uk/wireless-export-2511cdplusext2.htm


They kill routers quickly. I've had to reboot various routers over the years when the state-tracking buffers fill up (typically while sharing a flat with friends where everyone runs torrent, Direct Connect or similar).


Unless you are insinuating that IPFS is deliberately exploiting some bug in your router, then that's all it is: a router bug.

A forwarding implementation should never crash. Ever.


Yeah, probably a router bug that only go-ipfs manages to hit, so obviously it's the fault of the router, not the software that is the only one managing to crash the router.

No, actually, I'm going to continue believing that the only thing doing X is the cause of X, because there is absolutely zero evidence otherwise.


Have you ever heard of protocol standards?

Edit: OK, the problem is almost certainly in connection management. Note that the authors of RFC 791 explicitly tried to avoid keeping per-connection state in intermediate systems, for this and other reasons.

So the market decided against that and built this whole NAT monstrosity. In any case, though, the inability to maintain these structures correctly, or to handle out-of-memory conditions, _must_ fall on that implementation. Remote endpoints have no machinery to coordinate memory reservations on intermediate systems (alternate network-layer designs that do keep per-connection state have so far failed; we can speculate why).

More pragmatically, any crash of any router software is an error; you can ask any network protocol developer that ever existed. If the originator emitted a malformed header, or failed to keep up its end of some complicated state-management contract, then it too has a bug, but the intermediate system is still responsible for handling that gracefully.


> ipns

Please tell me this is a (Freudian?) typo.


InterPlanetary Name System (IPNS)

IPFS uses content-based addressing; it creates an address of a file based on data contained within the file. If you were to share an IPFS address such as /ipfs/QmbezGequPwcsWo8UL4wDF6a8hYwM1hmbzYv2mnKkEWaUp with someone, you would need to give the person a new link every time you update the content.

The InterPlanetary Name System (IPNS) solves this issue by creating an address that can be updated.

https://docs.ipfs.io/concepts/ipns/
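The relationship between immutable `/ipfs/` addresses and a mutable IPNS name can be sketched like this. A toy model only: real IPNS records are signed with the publisher's key and resolved via the DHT, and the names and hashes below are invented:

```python
import hashlib

def ipfs_addr(data: bytes) -> str:
    # Toy immutable content address derived from the bytes themselves.
    return "/ipfs/Qm-toy-" + hashlib.sha256(data).hexdigest()[:16]

objects = {}  # immutable content store: address -> bytes
names = {}    # mutable IPNS pointers: name -> current /ipfs/ address

def add(data: bytes) -> str:
    addr = ipfs_addr(data)
    objects[addr] = data
    return addr

def ipns_publish(name: str, addr: str) -> None:
    names[name] = addr  # re-point the name; old content stays reachable

v1 = add(b"site v1")
ipns_publish("/ipns/libgen-toy", v1)
v2 = add(b"site v2")
ipns_publish("/ipns/libgen-toy", v2)  # same name, new content

assert names["/ipns/libgen-toy"] == v2  # the name follows the update
assert objects[v1] == b"site v1"        # the old immutable object still resolves
```

So the `/ipns/` link stays stable across updates, while each `/ipfs/` address keeps pointing at exactly one immutable version forever.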


IPNS, not to be confused with NIPS or PNAS.

https://nips.cc/

https://www.pnas.org/


IPNS - InterPlanetary Name System



