Same. The fact they're shoving AI into it and expanding it to providers who don't have privacy as a guiding principle is a key reason I'm sitting on a 14 Pro still, and why I'm exploring local alternatives with Home Assistant.
Besides, we just need to set verbal timers and control music. We don't need a full-blown verbal Oracle.
Home Assistant is indeed quite nice and relatively simple to set up with the Docker images the team provides. Device setup on iOS was a little inconsistent, but the installation has been rock solid for over a year. Check out Homebridge as well; I run both.
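For anyone curious what that Docker setup looks like, here's a minimal Compose sketch along the lines of the official container install docs. The config path and the `privileged` flag are assumptions; `privileged` is only needed if you're passing through USB radios like a Zigbee stick.

```yaml
services:
  homeassistant:
    container_name: homeassistant
    image: ghcr.io/home-assistant/home-assistant:stable
    volumes:
      - ./config:/config              # persistent configuration (path is an assumption)
      - /etc/localtime:/etc/localtime:ro
    restart: unless-stopped
    privileged: true                  # assumption: only needed for USB device passthrough
    network_mode: host                # host networking so LAN device discovery works
```

Host networking is the part people tend to miss; without it, mDNS/SSDP discovery of devices on your LAN won't work from inside the container.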
I ought to take a break from my Docker Compose work and get back to migrating off HomeKit and onto Home Assistant. The Home Assistant Yellow has been a real champ thus far, and once it's set up I can tie the Unfolded Circle 3 into it for better control.
What value do you get out of Home Assistant that you don't get from Homebridge? I use Homebridge for a few devices: my Windmill AC, some Govee lights, and previously my IKEA smart lights (Tradfri, though Dirigera now supports HomeKit directly).
Not everything in life is a threat model, y’know; oftentimes it’s just personal preference.
I prefer to read reference material and do research instead of asking chatbots, for instance, because it helps the material stick better and enables me to make broader connections to disparate pieces of knowledge.
I also prefer technology to be narrow in scope and function, so I can spend more time enjoying life and less time troubleshooting why some needless complexity has failed again. This extends to voice assistants that consistently fumble on accents and grammar when given more complex queries, and that often want to send data out of my LAN to some random server I have no control over, just to process something that could be handled by any of the myriad GPUs and CPUs in my home instead.
Despite the EULA, TOS, and Privacy Policies governing these interactions, I intrinsically don't trust a relationship that requires revalidation of those policies every time an update is pushed, whose changes go unsummarized, and which forces me into hostile relationships with the vendors. I also generally believe that as live services, they have no sufficient incentive for security or privacy, but ample incentive for data mining and prolonged, frequent interactivity. Repeated incidents of supposedly "anonymous" and "private" conversations or data being inappropriately disclosed or compromised do not lend any sense of security to said services, at least for me. Add in the wider economic environment prioritizing immediate gains over sustainable business practices, and my own preference for building and nurturing long-term infrastructure to solve my problems consistently, and it's less a threat model and more a plain incompatibility between my personal needs and corporate goals.
What is your concern about prompts going to OpenAI? Apple has a contract with OpenAI that explicitly prevents them from logging, storing, training on, or making any use of your prompts other than to satisfy the specific current request. Apple has some good lawyers, and I'm sure the teeth are prominent in that contract.
The person I was responding to had privacy concerns. The closest thing to a privacy concern about LLM usage on iOS is Apple Intelligence, which sends some prompts to OpenAI to fulfill them. Thank you for the information about Apple's privacy program.
I send hundreds of prompts to OpenAI's LLM daily. I do not have a concern about it.
Not to mention the fact that the default settings are to ask the user before sending anything to ChatGPT, and you can selectively disable just the ChatGPT integration while leaving Apple Intelligence enabled.