Hacker News

I'm somewhat bullish on the Edge Computing approach to this. For example, how about a co-op neighborhood LLM rack?


I currently use a TrueNAS box as my home server, and I'm sure whatever I replace it with will add hardware for local inference and act as my Home Brain for various automations and assistants.
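For a sense of what that "Home Brain" glue might look like: a minimal sketch that sends an automation prompt to a locally hosted model through an OpenAI-compatible chat-completions endpoint (both Ollama and llama.cpp's server expose one). The hostname, port, and model name here are assumptions, not anything from this thread; swap in whatever your NAS actually runs.

```python
import json
import urllib.request

# Assumed local endpoint: Ollama and llama.cpp's server both expose an
# OpenAI-compatible /v1/chat/completions route. Host, port, and model
# name are placeholders.
LOCAL_LLM_URL = "http://truenas.local:11434/v1/chat/completions"

def build_request(prompt: str, model: str = "llama3.1:8b") -> urllib.request.Request:
    """Construct the HTTP request for a local chat completion."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a home-automation assistant."},
            {"role": "user", "content": prompt},
        ],
        "stream": False,
    }
    return urllib.request.Request(
        LOCAL_LLM_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def ask(prompt: str) -> str:
    """Send the prompt to the local model and return its reply text."""
    with urllib.request.urlopen(build_request(prompt), timeout=60) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the endpoint speaks the OpenAI wire format, the same automation scripts can be pointed at a cloud provider or at the box under your desk by changing one URL.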


Why would you do that, and when else has it worked?


  > how about a co-op neighborhood LLM rack?
It'll work about as well as those "wire your neighborhood for Internet as a collective" movements of the 1990s.

In fact, you'd need that kind of neighborhood wiring anyway to deal with latency, if my experience with consumer ISP quality is any indication.


Funny enough, I know two separate groups of people doing that. One in an apartment in SF running a shared Wi-Fi network with VLANs so they don't step on each other's frequencies, and another group sharing WISP infrastructure in semi-rural Utah where they can't convince ISPs to lay fiber. (Although I think the second group now uses that as a backup for Starlink.)


Yes, the UBNT NanoBeam 5AC[1] is dirt cheap. You can get a pair for about $199 for a wireless P2P link that works over several kilometers. I have deployed a number of these myself, along with higher-bandwidth airFiber gear.
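As a sanity check on "several kilometers": a quick free-space path-loss calculation for a 5 GHz point-to-point link. The radio numbers below (TX power, antenna gain, receiver sensitivity) are ballpark assumptions for gear in this class, not datasheet values.

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

def link_margin_db(distance_km: float,
                   tx_power_dbm: float = 25.0,        # assumed, ballpark for this class
                   antenna_gain_dbi: float = 25.0,    # assumed dish gain, each end
                   rx_sensitivity_dbm: float = -90.0, # assumed; varies with data rate
                   freq_mhz: float = 5800.0) -> float:
    """Received signal minus sensitivity; positive means the link should close."""
    rx_dbm = tx_power_dbm + 2 * antenna_gain_dbi - fspl_db(distance_km, freq_mhz)
    return rx_dbm - rx_sensitivity_dbm

# A 5 km hop at 5.8 GHz loses roughly 122 dB in free space, but with ~25 dBi
# of dish gain on each end there is still tens of dB of margin left over.
```

In practice you also need line of sight with Fresnel-zone clearance, and interference and modulation-rate requirements eat into that margin, so real deployments run tighter than the raw numbers suggest.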

1. https://store.ui.com/us/en/pro/category/all-wireless/product...


How is “Edge Computing” any different from what the GP was describing with current cloud-hosted providers (e.g. OpenAI)?


There are different possible approaches here, but for me, the main benefit would be having control over the model(s) used. Beyond that, low latency would be crucial for effective voice interaction.


I would much rather send my personal information to Microsoft and the NSA than people in my neighborhood



