Hacker News

While I have your ear, please implement some way to do third party integrations safely. There’s a tool called GhostWrite which autocompletes emails for you, powered by ChatGPT. But I can’t use it, because that would mean letting some random company get access to all my emails.

The same thing happened with code. There’s a ChatGPT integration for pycharm, but I can’t use it since it’ll be uploading the code to someone other than OpenAI.

This problem may seem unsolvable, but there are a few reasons to take it seriously. E.g. you’re outsourcing your reputation to third party companies. The moment one of these companies breaches user trust, people will be upset at you in addition to them.

Everyone’s data goes to Google when they use Google. But everyone’s data goes to a bunch of random companies when they use ChatGPT. The implications of this seem to be pretty big.



I can't speak for every company, but I've seen a lot of people claiming that they're leveraging "ChatGPT" for their tech stack when under the covers they're just using the standard text-davinci-003 model.

Still wrong obv but for a different reason.


Welcome to marketing copy. ChatGPT has the name recognition. text-davinci-003 does not.


GPT-3 surely does too, but ChatGPT is undeniably the new hotness.


I don't really see the issue. You are using a service called GhostWrite which uses ChatGPT under the hood. OpenAI/ChatGPT would be considered a sub-processor of GhostWrite. What am I missing?


On properly designed privacy respecting systems, the client sends the request to the trusted server with whatever API keys are needed to make it work.

But that would break the server-side lock-in subscription model, so it would only work for downloadable software.


How are they using ChatGPT - is there an API? Or is this simply abuse of TOS?


They're not using ChatGPT, they're using GPT-3, which has an API. There is a ChatGPT API coming but it's not available yet.

It is infuriating how everyone is describing all GPT models as "ChatGPT". It's very misleading.


Supposedly there is a hidden model that you can use via the API that actually is ChatGPT. One of the libraries mentioned in these comments is using it.

Edit: this one https://github.com/transitive-bullshit/chatgpt-api


In case anyone wants to replace davinci-003 with the ChatGPT model, the name is `text-chat-davinci-002-20230126`
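A minimal sketch of what swapping in that model name would look like, assuming OpenAI's standard `/v1/completions` endpoint. The helper below only builds the request payload rather than sending it; the model name comes from the comment above and is undocumented, so it may stop working at any time.

```python
import json

# Undocumented model name quoted in the comment above; not guaranteed
# to remain available.
CHAT_MODEL = "text-chat-davinci-002-20230126"

def build_completion_request(prompt, model=CHAT_MODEL, max_tokens=256):
    """Return (url, headers, body) for a /v1/completions call.

    Hypothetical helper for illustration: it assembles the request
    but does not perform any network I/O.
    """
    url = "https://api.openai.com/v1/completions"
    headers = {
        "Content-Type": "application/json",
        # Replace the placeholder with a real key before sending.
        "Authorization": "Bearer $OPENAI_API_KEY",
    }
    body = json.dumps({
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
    })
    return url, headers, body

url, headers, body = build_completion_request("Say hello")
print(url)
```

Sending `body` to `url` with an `Authorization` header is then a one-liner with any HTTP client; the point is only that the model field is the single thing that changes versus a davinci-003 call.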



> Everyone’s data goes to Google when they use Google. But everyone’s data goes to a bunch of random companies when they use ChatGPT.

No, their data goes to random companies when they use random companies. And these services also exist for Google.


> But I can’t use it, because that would mean letting some random company get access to all my emails.

That's because they do it to get access to your e-mails, not to give you AI powered email autocomplete.


Honestly, they’ll probably offer some enterprise offering where data sent to the model will be contained and abide by XYZ regulation. But for hobbyist devs, I think this won’t be around for a while.


Isn't this what the Azure OpenAI service is for? Sure it's technically "Microsoft", but at some point you have to trust someone if you want to build on the modern web.


Tl;dr

"Dear CTO, let me leech onto this unrelated topic to ask you to completely remove ways you gather data (even though it's the core way you create any of your products)."

Some people man..


I think you may have misread. The goal is to protect end users from random companies taking your data. OpenAI themselves should be the ones to get the data, not the other companies.

That wouldn't remove anything. Quite the contrary, they'd be in a stronger position for it, since the companies won't have access to e.g. your email, or your code, whereas OpenAI will.

I'm fine trusting OpenAI with that kind of sensitive info. But right now there are several dozen new startups launching every month, all powered by ChatGPT. And they're all vying for me to send them a different aspect of my life, whether it's email or code or HN comments. Surely we can agree that HN comments are fine to send to random companies, but emails aren't.

I suspect that this pattern is going to become a big issue in the near future. Maybe I'll turn out to be wrong about that.

It's also not my choice in most cases. I want to use ChatGPT in a business context. But that means the company I work for needs to also be ok with sending their confidential information to random companies. Who would possibly agree to such a thing? And that's a big segment of the market lost.

Whereas I think companies would be much more inclined to say "Ok, but as long as OpenAI are the only ones to see it." Just like they're fine with Google holding their email.

Or I'm completely wrong about this and users/companies don't care about privacy at all. I'd be surprised, but I admit that's a possibility. Maybe ChatGPT will be that good.


Sketch of a design to solve this:

Company can upload some prompts to OpenAI, and be given 'prompt tokens'.

Then the company's client-side app can run a query with '<prompt_token>[user data]<other_prompt_token>'. They may have a delegated API key with limits applied: for example, it may only use this model, and must always start with this prompt.

That really reduces the privacy worries of using all these third party companies.
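The scheme above could be sketched roughly as follows. Everything here is invented for illustration (the class, the token format, the method names); the point is only that the provider stores the prompts, the client app ships with a delegated key, and the final prompt is assembled server-side so the middle-man never handles raw prompt text it isn't entitled to.

```python
import secrets

class PromptRegistry:
    """Hypothetical provider-side registry for the scheme above:
    companies upload prompts once, get opaque tokens back, and a
    delegated key may only run queries sandwiching user data
    between its registered prompt tokens."""

    def __init__(self):
        self._prompts = {}       # token -> uploaded prompt text
        self._key_policies = {}  # delegated key -> (prefix_token, suffix_token)

    def register_prompt(self, text):
        # Issue an opaque token for an uploaded prompt.
        token = secrets.token_hex(8)
        self._prompts[token] = text
        return token

    def issue_delegated_key(self, prefix_token, suffix_token):
        # Bind a client-distributable key to a fixed prompt sandwich.
        key = "dk-" + secrets.token_hex(8)
        self._key_policies[key] = (prefix_token, suffix_token)
        return key

    def build_query(self, api_key, user_data):
        # Assemble the final prompt server-side; the client only ever
        # supplies user data, never the prompt text itself.
        if api_key not in self._key_policies:
            raise PermissionError("unknown delegated key")
        prefix, suffix = self._key_policies[api_key]
        return self._prompts[prefix] + user_data + self._prompts[suffix]

# Usage: register prompts once, ship only the delegated key in the app.
reg = PromptRegistry()
p1 = reg.register_prompt("Summarize this email:\n")
p2 = reg.register_prompt("\nReply in a friendly tone.")
key = reg.issue_delegated_key(p1, p2)
print(reg.build_query(key, "Hi, can we move the meeting?"))
```

In this shape the delegated key can't be repurposed to run arbitrary prompts, which is exactly the limit the comment proposes.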


ChatGPT has sparked the imagination of the industry, but the fire will be lit by offline models that can accept private data.


Bad take. He's actually asking for them to directly gather data as he trusts them more than the random middle-men who are currently providing the services he's interested in.

As someone working for a random middle-man, I hope OpenAI maintain the status quo and continue to focus on the core product.



