I think of it not as "last step in thinking", but as "first contact with reality". Your mind is amazing and lying to you, filling in gaps and telling you everything is ok. The moment you try to export what's in your mind, math stops mathing. So writing is an important exercise.
This is fine. But companies seem to not have a control lever for employee wellbeing. If humanity works to solve problems, don’t you think overwork is also a problem that needs to be addressed?
“The rationale for the change, according to Rodriguez, is that interaction data makes company AI models perform better. Adding interaction data from Microsoft employees has led to meaningful improvements, he claims, such as an increased acceptance rate for AI model suggestions.”
That's the problem with promises. Of course using the data makes your model better; everyone knew this, and knew you'd be tempted to use it, and that is exactly why you felt motivated to promise not to use it in order to gain adoption.
So justifying the rug pull by saying it helps improve your model is meaningless. You can just say you lied. Lying is a real word; it should be used when it applies.
The problem here is identity, a real problem, and it looks like it will only get worse. What if we created a zero-knowledge digital identity service (maybe centralized, maybe decentralized, idk), where you prove you're human via your government ID, passport, driver's license, whatever, and the service can then attest you're a real person? So if I'm Digg, I ask for some form of OAuth, the system simply verifies that you are in fact a human, and you go on to create your account, forever verified. This way the identity service only does identity: it does not keep a record of where you are attesting, no logs, nothing, just your identity, basically saying yes/no, with no sharing of IDs or any other data.
So people would go through one hurdle in life, to get this id, and reuse it for every service.
Is this a worthwhile idea? I know many have tried, so help me poke holes in it.
No, the problem is people want everything for free. The solution is very simple. Charge $5 to open an account. Only allow a person to moderate one forum/community/subreddit/etc... Ruthlessly delete accounts that break rules. This would work, but no one on the internet wants to pay for a quality forum, so we deal with the same crap over and over and over and pretend like there is some other solution.
1/ KYC is pricey, and users might not want to pay for it
2/ Spammers can hire real people to farm accounts
I think this idea might work if we
- create a reputation graph, where valuable contributors vote for others and spread reputation
- let users fine-tune their reputation graph, so instead of "one for all", each user can have a personal customized graph (pick 30 authorities and we rebuild the graph from there)
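One way to read "pick 30 authorities and rebuild the graph from there" is personalized PageRank seeded at the chosen authorities. A sketch under that assumption (the function and parameter names are mine, not from the comment):

```python
def personalized_reputation(votes, seeds, damping=0.85, iters=50):
    """votes maps voter -> list of users they vouch for; seeds are the
    authorities this user picked. Reputation mass starts at the seeds and
    flows along vote edges, so each user gets their own customized graph."""
    nodes = set(votes) | {v for outs in votes.values() for v in outs} | set(seeds)
    rank = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in nodes}
    for _ in range(iters):
        # restart mass always returns to the user's own seeds
        nxt = {n: ((1 - damping) / len(seeds) if n in seeds else 0.0) for n in nodes}
        for node in nodes:
            outs = votes.get(node, [])
            if outs:
                for v in outs:
                    nxt[v] += damping * rank[node] / len(outs)
            else:
                # node votes for no one: send its mass back to the seeds
                for s in seeds:
                    nxt[s] += damping * rank[node] / len(seeds)
        rank = nxt
    return rank
```

Because the restart always goes to the user's seeds, a spammer's vote ring earns nothing unless someone in the user's trusted set vouches for it.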
I think apps that want assurance of your identity should pay for your KYC. They want valuable people, after all, and this should go into their CAC. Users still pay nothing, and the identity service does not care about their info: after verification it drops all details (uploaded documents, whatever) and keeps only a certificate.
The cost of running this service is likely keeping up with ID systems for multiple countries, plus infra and support.
Potentially, if this is made into a protocol, it can be decentralized, kind of like the SSL/CA system, so each country manages its own rules.
I am less concerned here. If you plug AI into your identity, I guess your identity gets revoked. I do see the problem, though: once a service notices you're an AI, there is no way to block you, because we don't really know who you are, only that you're human.
So we need a mechanism that makes this identity verifiable: maybe you get a unique identifier from the identity service, so the account can be blocked. There is no mechanism to report you to, say, the identity service ("this is a bot"), so each service manages its own block list.
The risk here is fingerprinting: your ID can be cross-referenced across apps. Maybe this is where you implement a ZK proof that you are who you say you are.
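A lighter-weight fix for the cross-referencing risk, short of full ZK proofs, is for the identity service to derive a different identifier per app, similar in spirit to OpenID Connect's pairwise subject identifiers. A sketch, where the salt and names are assumptions:

```python
import hashlib
import hmac

# Secret held only by the identity service (hypothetical value).
PAIRWISE_SALT = b"identity-service-secret"

def pairwise_id(user_id: str, app_id: str) -> str:
    """Per-app identifier: stable within one app (so the app can block
    the account, solving the problem above), but apps cannot join their
    user lists without the service's secret salt."""
    msg = f"{user_id}|{app_id}".encode()
    return hmac.new(PAIRWISE_SALT, msg, hashlib.sha256).hexdigest()
```

The same person shows up as unrelated identifiers on Digg and on any other app, yet each app can maintain its own block list against a stable handle.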
I don’t love the original idea because uploading identification is risky. You could still just plug AI into a verified account, but at least the attack vector is a single account instead of unbounded.
Can we just use something like AWS instance roles where the key isn’t even known to the agent? The sandbox is authenticated, the agent running there sends requests, the sandbox middlemans the request to perplexity, which authenticates the sandbox.
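The core of the instance-role pattern is the injection step: the agent builds an unauthenticated request, and the sandbox proxy adds the credential before forwarding upstream. A minimal sketch (the function name, request shape, and key are illustrative assumptions, not a real API):

```python
def attach_credentials(agent_request: dict, api_key: str) -> dict:
    """Runs in the sandbox host process, not the agent. Returns a copy of
    the agent's request with the Authorization header added; the agent
    itself never holds api_key, so a compromised agent can't exfiltrate it."""
    headers = {**agent_request.get("headers", {}),
               "Authorization": f"Bearer {api_key}"}
    return {**agent_request, "headers": headers}
```

The proxy would then forward the authenticated copy to the upstream API (Perplexity, in the comment's example) and relay the response back, so revoking access is a sandbox-side operation and the key never appears in the agent's environment or logs.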
I see how this can boost productivity... for those who already produce value voluntarily today. They will move one level higher. The rest will 100x the amount of performative work. Everyone will be busier creating presentations and charts that no one needs and no one will read. Managers will ask for new presentations and reports every sync, and hours will be spent discussing things that don't actually matter.
I think with the amount of AI content, it will all become cheap: there will be movie releases every day, people will get content fatigue, it will all merge into the same thing, and nothing will be inspiring anymore. People will start looking for physical experiences. Movies that transcend the screen, think 4D experiences, but taken all the way: putting you in the center of the action, maybe you're the protagonist, etc.
I'd like to think I am pretty security conscious, but I still don't get the obsession with magic links (and passkeys). This is the one thing where I think I disagree with most of the industry. I thought forgetting passwords was a solved problem. Isn't 2FA much faster than searching for the latest email from provider X, which maybe takes a minute to arrive, requires retries, and might end up in spam? Someone please help me get on board.
It depends on how convenient it is for you to constantly carry devices that have 2FA software or the correct SIM card installed. I might prefer to simply access my email account, which I know how to do anywhere.