Is it possible to create censorship-resistant, IPFS-based webapps? I mean, couldn't a webapp be combined with something similar to PeerTube and IPFS to create a public media player where you have to download neither the player nor the media?
An example of such a thing is the Libgen search interface hosted on IPFS (both the data and the webapp). If you have a means to directly navigate IPFS (that is, without using a proxy), it can be found here:
ipns://libgen.crypto/
If you do not yet have this set up, the same thing can be reached through a proxy, e.g.:
The former (pure IPFS/IPNS) link is resistant to censorship as long as access to IPFS is available. The latter can of course be censored, but once IPFS becomes mainstream the need for such proxies will disappear.
You need access to the internet; that's about it. IPFS can use any transport protocol (see section 3.2 in the whitepaper [1]). It uses a distributed hash table for routing and content addressing to represent objects. Objects are immutable: once published, they remain available as long as there is at least one peer which has the object in cache or 'pinned' (permanently cached).
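As a rough illustration of what content addressing means, here is a simplified sketch (real IPFS addresses are multihash-encoded CIDs, not bare SHA-256 hex digests, but the principle is the same):

```python
import hashlib

def content_address(data: bytes) -> str:
    # Simplified: the address is just the SHA-256 digest of the content.
    # Real IPFS wraps the digest in a multihash and encodes it as a CID.
    return hashlib.sha256(data).hexdigest()

addr = content_address(b"hello world")
# The same bytes always map to the same address...
assert content_address(b"hello world") == addr
# ...and any change to the content yields a different address,
# which is why published objects are effectively immutable.
assert content_address(b"hello world!") != addr
```

Because the address is derived from the bytes themselves, it doesn't matter which peer serves them: the receiver can always re-derive the address and check it.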
Read the whitepaper and install [2] a node of your own to get a feel for the thing; you'll soon find out it is an amalgamation of earlier peer-to-peer systems. The go-ipfs daemon tends to be quite busy: it averages somewhere around 30% CPU, 500MB memory, 0.1Mb/s in and 0.04Mb/s out when hosting ~3GB of (self-generated, niche-interest, database-related) files. This busyness is acknowledged by the developers and should be addressed somewhere down the line.
> IPFS can use any transport protocol (see section 3.2 in the whitepaper [1]),
In theory. In practice, the network (I checked my local node, which had ~2500 peers connected) mostly uses TCP and QUIC (which runs over UDP), with a roughly 50/50 split between the two.
> This busyness is acknowledged by the developers and should be addressed somewhere down the line.
The "router kill" problem is a problem with those routers, not with IPFS, or any other chatty program for that matter. That said, IPFS is chatty and as such not something you'd run on a dial-up line. ADSL 4/1 or higher would be fine though, as long as you get a router which can take the load - just find something that can run OpenWRT and has enough memory and you're set.
Of course you don't need to run IPFS to get at IPFS-hosted content, there are plenty of gateways out there (one of them hosted by Cloudflare). Run your own node if you want to have full control over the path between IPFS and your instances, if you want to contribute to the decentralisation of the 'net or if you just like to tinker.
> The "router kill" problem is a problem with those routers, not with IPFS, or any other chatty program for that matter.
It's really hard to be convinced by that argument when go-ipfs is the only software that manages to kill people's routers until they reboot them, while literally every other piece of software they use works perfectly, even BitTorrent and other data-heavy protocols.
The same problem has occurred with many P2P protocols; just search for 'p2p router crash'. The problem occurs with BitTorrent, DC++, eDonkey and, yes, IPFS, as well as many other applications which open a lot of connections at the same time. This causes the undersized NAT connection-tracking tables to overflow, after which the device can no longer create new connections.
I'm rather surprised that you think only go-ipfs causes these problems, given that this is a well-known issue with lower-spec or misconfigured consumer routers, cable modems and similar devices. Sometimes it can be solved by increasing the size of the tables (which are often set to some ridiculously low number like 1024 or 2048 entries) if the device has enough memory. If this is not feasible, just get a better device with OpenWRT or a similar free software distribution, configure it for 16K connections and it should work.
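For what it's worth, on Linux-based routers such as those running OpenWRT the relevant knob is the netfilter connection-tracking limit; a sysctl fragment along these lines raises it (file path and values are illustrative, and the right size depends on the device's memory):

```
# /etc/sysctl.d/99-conntrack.conf (illustrative path and values)
# Raise the NAT connection-tracking table from the low default
# (often 1024-2048 entries on cheap devices) to 16K connections.
net.netfilter.nf_conntrack_max = 16384
```

Each conntrack entry costs a few hundred bytes of kernel memory, so this is only sensible on devices with RAM to spare.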
Why don't you look through the first issue I linked to earlier (3320; also read the linked issues for even more examples)? There are plenty of examples of people having trouble with only go-ipfs while other software reportedly works fine.
Getting a new router is recommended in that issue too, but that's a band-aid; it doesn't actually solve the problem. BitTorrent and its various clients have been able to solve this, and since the issue is still open, it seems Protocol Labs, who work on go-ipfs, think they can solve it too. Are you sitting on some information that Protocol Labs doesn't have, thinking that this issue can never be solved? In that case I suggest you share your ideas in that issue, so people can understand that go-ipfs is not for normal consumers with standard routers, and that they need to make sure to get a proper router before trying this specific software.
I don't go through such issues because it works for me™ and I know why it does not work for those routers. It is not as if I have a super-special router either: just a container running OpenWRT; earlier I used a Netgear WNDR3700 running OpenWRT [1], and both worked fine. The problem can be solved, often without having to buy new hardware, if the existing device supports one of the alternative distributions (OpenWRT being my personal choice, but there are others).

Since it, as stated, works for me™ without my needing to do anything at all, I can only assume the problem is related to the limited table space I mentioned earlier, something which still plagues not just IPFS but other P2P applications as well; just search for 'Bittorrent killing internet connection' for some examples. The workaround there is to reduce the global connection count to a number which the router can handle. The same can be implemented in IPFS, but the real solution is to make sure the router can handle a large enough number of connections.
Yes, that is the real solution and not a band-aid, at least if you want the net to become more decentralised, like I do, which is why I run IPFS and a host of other services. ISP-provided hardware often does not handle this load, both because the providers have contracted out to the lowest bidder for these devices and to disincentivise people from running services on their connection.

If you're forced to use provider equipment, make sure to get only a simple switch, transceiver (in case of a fibre connection) or modem. Don't go for that shiny all-in-one box which promises a one-stop internet solution, as that thing is a) controlled by your provider and as such b) in service of your provider first and foremost, enabling them to e.g. use your connection for wifi sharing outside of your control. It is also likely to be hampered by the problems mentioned above. Get your own router and your own wifi access points (which can be combined with the router but don't need to be; re-purposed cheap routers running OpenWRT make for good access points), totally under your own control. Make sure the devices can run something like OpenWRT so you're not stuck with vendor firmware. Install OpenWRT on all devices, configure them to your liking and you're done.
Source: this is what I've been doing for about 30 years now, from back in the days before wifi was a thing: going from 10base2 ("thinnet") to 100baseT to gigabit; from no wifi, through an Engenius/Senao 200mW 802.11b card [2] in the back of the server tower as AP (it covered the whole farm easily), through a WRT54GL running DD-WRT (killed by lightning), two Asus RT-N16 running DD-WRT (both killed by lightning) and some cheap Sitecom thing running OpenWRT (killed by, you guessed it, lightning), to the mentioned Netgear WNDR3700 running OpenWRT and now a virtual OpenWRT router on the server-under-the-stairs; from 1.5/0.128 cable through on-demand dialup to ADSL 2/0.25 (2 modems killed by lightning), ADSL 8/1 "best effort" (4 modems killed by lightning) and on to gigabit fibre. I use a number of "Xiaomi Mi Router 4A Gigabit Edition" (yes, that is the name) units running OpenWRT as access points; they were simply the cheapest (€29) option I could find which could a) run OpenWRT and b) offer at least 2x2 MIMO. I would not use these things without replacing the firmware, since I do not see the need to let Xi and friends into my network, but given that I was planning on doing so anyway this did not bother me.
So, get some reasonable hard/firmware and things should just work. They work for me™ after all...
They kill routers quickly. I've had to reboot various routers over the years when the state-tracking buffers grow full (typically when sharing a flat with friends where everyone runs torrents, Direct Connect or similar).
Yeah, probably a router bug that only go-ipfs manages to hit, so obviously it's the fault of the router, not the software that is the only one managing to crash the router.
No, actually, I'm going to continue believing that the only thing doing X is the cause of X, because there is absolutely zero evidence to the contrary.
edit:
OK, the problem is almost certainly in connection management. Note that the authors of RFC 791 explicitly tried to avoid keeping per-connection state in intermediate systems, for this reason among others.
The market decided against that and built this whole NAT monstrosity. In any case, the inability to maintain these structures correctly, or to handle out-of-memory conditions, _must_ fall on that implementation. Remote endpoints have no machinery to coordinate memory reservations on intermediate systems (alternate network-layer designs that do keep per-connection state have so far failed... we can speculate why).
More pragmatically, any crash of any router software is an error; you can ask any network protocol developer that ever existed. If the originator emitted a malformed header, or failed to keep up its end of some complicated state-management contract, then it too has a bug, but the intermediate system is still responsible for handling that gracefully.
IPFS uses content-based addressing: it creates an address for a file based on the data contained within the file. If you were to share an IPFS address such as /ipfs/QmbezGequPwcsWo8UL4wDF6a8hYwM1hmbzYv2mnKkEWaUp with someone, you would need to give that person a new link every time you update the content.
The InterPlanetary Name System (IPNS) solves this issue by creating an address that can be updated.
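A toy model of the idea, purely illustrative (real IPNS records are signed with the publisher's private key and resolved through the DHT; this sketch keeps only the sequence number and drops the cryptography):

```python
import hashlib

# Toy IPNS: a stable name maps to the *latest* content address.
# The name is derived from the publisher's public key, so it never
# changes even though the content it points to does.
records: dict[str, tuple[int, str]] = {}  # name -> (sequence, content address)

def name_for(pubkey: bytes) -> str:
    return hashlib.sha256(pubkey).hexdigest()

def publish(pubkey: bytes, content_address: str) -> None:
    # A real record would carry a signature; resolvers keep the
    # record with the highest sequence number.
    name = name_for(pubkey)
    seq = records.get(name, (0, ""))[0] + 1
    records[name] = (seq, content_address)

def resolve(name: str) -> str:
    return records[name][1]

key = b"publisher public key"          # hypothetical key material
publish(key, "address-of-version-1")   # hypothetical content addresses
publish(key, "address-of-version-2")
# Same stable name, newest content:
assert resolve(name_for(key)) == "address-of-version-2"
```

The key names and addresses above are made up; the point is only that one unchanging identifier can track changing content.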
I don't see how it could be censorship resistant, tbh. The data has to be served from a service somewhere, and I can't think of reasons IPFS nodes would be resistant to takedowns.
IPFS is content-addressed, so as long as you have the ID of the thing you want, you can download it from any node and verify what you received against that ID. So as long as you can connect to one node that has your content, you'll be able to download it.
Of course, nothing is 100% censorship resistant, but content-addressing helps a lot.
I read that IPFS purposefully has mechanisms in it to allow banning content. While you could hypothetically run a custom client that won't ban content, peered nodes might still ban the content, leaving you with no source.
Based on what I read long, long ago, IPFS is very much not intended to be censorship resistant.
> I read that IPFS purposefully has mechanisms in it to allow banning content
I'd be interested in reading whatever article you got that from, because last time I checked, IPFS doesn't have any such mechanism.
You might be confusing it with the content blocking Protocol Labs does on the public IPFS gateway (https://ipfs.io/ipfs/hash). The gateway, being a centralized entry point to distributed IPFS content, is hosted by a US party and must therefore follow US law, so sometimes they block content from being accessed via the gateway.
I can totally see how some nodes could block content, and refuse to propagate information on certain content addresses, but as far as I know, this would only slow down reaching the desired content, not completely block it—every node on the network would need to blacklist the banned address, which is pretty infeasible.
Also, as far as I know, the official daemon doesn't really have any functionality to block certain addresses like this.
For censorship resistance there needs to be incentivised replication such that the number of nodes in the p2p storage network (providing the data set or chunks of it) is very large and therefore “take down” becomes intractable. Also, clients need to retrieve data from/as peers in the p2p network rather than through gateways.
I think that's exactly what this is supposed to be. I mean, you have to _download_ it somehow, it just happens to be that this is downloaded in the browser, and played from there. No need for Peertube.
IPFS has a similar property to BitTorrent in that, as a piece of content is requested more often, it gets cached on more nodes in the network and becomes easier to find, which generally improves performance.
+1 My love for Winamp classic is bottomless. I would love to love IPFS as much. And information should be free. If your art can be consumed digitally, deal with that. I'm willing to let go of all the (supposedly) great art for the greater aesthetic of free culture. NFTs are noise but be my guest.
Safari's support for anything Apple doesn't use on its own web properties is really poor. WebRTC and WebSockets have also been lagging behind for a long time, and when they do catch up, the implementations tend to be very buggy.
Apple and Google's business incentives are in opposition. Google benefits from web apps replacing iOS apps, while Apple benefits from iOS apps replacing web apps.
Native apps can offer many benefits (such as power efficiency), but I expect that business reasons are equally if not more important.