satuke's comments | Hacker News

Why do you need all those permissions for signing up?


We use email metadata permissions to map your contacts, so you can find paths you might have to VCs or other organisations.

But importantly: we don't get access to the email body/content.


I see, but I don't think many people will be comfortable granting that permission; I personally am not. You might want to think about another way to achieve what you want.


Pretty cool, what you've made.


Thank you!

Unfortunately, most users so far seem to suspect they're talking to an AI, or that it's not real somehow...


Well, that's a challenge. Maybe add a captcha to help with this?


I thought about it, but ideally I want to do without captchas... I think they can be a burden and may not be necessary for a tiny niche site like this.

For now I have some other bot mitigations in place so I wanted to see how far they'd get me.


No, I meant that with a captcha, the people who come in will have a sense of trust, because they know that everyone has to pass the same test.


Ah yeah, sure, I'll consider it.


How is this different from https://posthog.com/?


PostHog has a strong emphasis on developers, with features like feature flags and upcoming warehouse / CDP features.

Flywheel is focused on go-to-market teams and non-technical users.


They're synced in real time with your code. We run a proxy server on your machine that talks to your website's editor and to the codebase, syncing the changes as you make them.
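
For what it's worth, here's a minimal sketch of what a local sync proxy like that could look like. This is purely illustrative and not the actual tool: the endpoint, port, and find-and-replace payload shape are all assumptions, since the real protocol isn't described here.

    # Hypothetical sketch: a local server that receives edits from an
    # in-browser visual editor and rewrites the matching source file.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class EditSyncHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            edit = json.loads(self.rfile.read(length))
            # e.g. {"file": "src/App.css", "find": "color: red", "replace": "color: blue"}
            with open(edit["file"], "r+", encoding="utf-8") as f:
                source = f.read()
                f.seek(0)
                f.write(source.replace(edit["find"], edit["replace"]))
                f.truncate()
            self.send_response(204)
            self.end_headers()

    if __name__ == "__main__":
        # The editor in the browser POSTs each change here as you make it.
        HTTPServer(("127.0.0.1", 8733), EditSyncHandler).serve_forever()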


Thanks for pointing that out, I'll fix it! Here's a screenshot that might help: https://imgur.com/HlmOoK4

I've also added it to the website.


This is a very early version of a visual editor that I'm building, which lets you edit like you would in DevTools/Figma and sync those changes to your code. Everything runs on the user's machine, so there's no need to log in. The aim here is to help you build visually when you want to, without locking you into a platform.


Actually, that's a by-product of RLHF. A base model is usually not that verbose.


Isn't that exactly how humans learn to respond to stimuli? Don't we just try to predict the best next response to everything? Yes, it's statistics, but the fun part is that nobody is writing this statistical function by hand.


LLMs don't have a concept of "best", only of what's most likely given what they've been trained on.

I think LLMs ultimately just take imitation to a creative and sophisticated extreme. And imitation simply doesn't comprise the whole of human intelligence at all, no matter how much it is scaled up.

The sophistication of the imitation has some people confused and questioning whether everything can be reduced to imitation. It can't.

The ability to imitate seeking a goal isn't identical to the ability to seek a goal.

The ability to imitate solving a problem isn't identical to the ability to solve a problem.

Imitation is very useful, and the reduction of everything to imitation is an intriguing possibility to consider, but it's ultimately just wrong.


You need to think deeper.

There are levels of sophistication in "imitation". It follows a gradient. At the low end of this gradient is a bad imitation.

At the high end of this gradient is a perfect imitation. Completely indistinguishable from what it's imitating.

If an imitation is perfect, then is it really an imitation?

If I progressively make my imitation more and more accurate am I progressively building an imitation or am I progressively building the real thing?

See what's going on here? You fell for a play on words. It's a common trope. Sometimes language and vocabulary actually trick the brain into thinking in a certain direction. This word "imitation" is clouding your thoughts.

Think about it. A half built house can easily be called an imitation of a real house.


Ok, so now we need an example that separates humans from LLMs?

I struggle to think of one, maybe someone on HN has a good example.

E.g., if I'm in middle school and learning quadratic equations, am I imitating solving the problem by plugging in the coefficients, or am I understanding it?

Most of what I see coming out of ChatGPT and Copilot could be said to be either. If you're generous, it's understanding. If not, it's imitation.


It is very easy to separate humans from LLMs. Humans created math without being given all the answers beforehand. LLMs can't do that yet.

When an LLM can create math to solve a problem, we will be much closer to AGI.


Some humans created maths. And it took thousands of years of thinking and interaction with the real world.

Seems like goalpost moving to me.

I think the real things that separate LLMs from humans at the moment are:

* Humans can do online learning, and they have long-term memory. I guess you could equate evolution to the training phase of AI, but LLMs still don't seem to have quite the same online learning capabilities as us. This is probably what prevents them from doing things like inventing maths.

* They seem to be incapable of saying "I don't know". OK, to be fair, lots of humans struggle with this too! I'm sure it will be solved fairly soon though.

* They don't have a survival instinct that drives proactive action. Sure, you can tell them what to do, but that doesn't seem quite the same.


Interestingly, some humans will admit to not knowing but are allergic to admitting being wrong (and can get fairly vindictive if forced to admit it).

LLMs actually admit to being wrong easily, but they aren't great at introspection and confabulate too often. Their metacognition is also still poor.


I guess LLMs don't have the social pressure to avoid admitting errors. And those sorts of interactions aren't common in text, so they don't learn them strongly.

Also ChatGPT is trained specifically to be helpful and subservient.


About this goalpost-moving thing: it's become very popular to say, but I have no idea what it's supposed to mean. It's like a metaphor with no underlying reality.

Did a wise arbiter of truth set up goalposts that I moved? I guess I didn't get the memo.

If the implied claim is "GPT would invent math too given enough time", go ahead and make that claim.


> Did a wise arbiter of truth set up goalposts that I moved?

Collectively, yes. The criticism of AI has always been "well, it isn't AI because it can't do [thing just beyond its abilities]."

Maybe individually your goalpost hasn't moved, and as soon as it invents some maths you'll say "yep, it's intelligent" (though I strongly doubt it). But collectively the naysayers in general will find another reason why it's not really intelligent. Not like us.

It's very tedious.


Other than complaining about perceived inconsistencies in others' positions, what do you actually believe? Do you think GPT is AGI?


No. I don't think anyone seriously believes that. AGI requires human-level reasoning, and it hasn't achieved that, despite what benchmarks show (they tend to focus on "how many did it get right" more than "how many did it fail in stupid ways").

The issue with most criticism of LLMs with respect to AGI is that it comes up with totally bogus reasons why they aren't, and can never be, real intelligence.

It's just predicting the next word. It's a stochastic parrot. It's only repeating stuff it has been trained on. It doesn't have quantum microtubules. It can't really reason. It has some failure modes that humans don't. It can't do <some difficult task that most humans can't do>.

Seems to be mostly people feeling threatened. Very tedious.


You can ask ChatGPT to solve maths problems which are not in its training data, and it will answer an astonishing number of them correctly.

The fact that we have trained it on examples of human-produced maths texts (rather than through interacting with the world over several millennia) seems more like an implementation detail than a piece of evidence about whether it has “understood” or not.


They also get problems wrong, in the dumbest way possible. I've tested this many times where the LLM got most of the more 'difficult' part of the problem right but then forgot to do something simple in the final answer, and not the kind of simple error a human would make. It's incredibly boneheaded, like forgetting to apply the coefficient it solved for and just returning the initial problem value. Sometimes, for coding snippets, it says one thing and then produces code that doesn't even incorporate the thing it was just talking about. It's clear that there is no actual conceptual understanding going on. I predict the next big breakthroughs in physics will not be made by LLMs, even with the advantage of being able to read every single paper ever published, because they cannot think.


> LLMs don't have a concept of "best", only of what's most likely given what they've been trained on.

At temperature 0 they are effectively producing the token that maximizes a weighted sum of the base LM's log-probability and the reward model's score.
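
For reference, one way to make that precise (a sketch at the level of whole responses rather than individual tokens): the standard KL-regularized RLHF objective, with base model pi_base, reward model r, and KL coefficient beta, has the closed-form optimum

    \pi^{*}(y \mid x) \propto \pi_{\text{base}}(y \mid x)\, \exp\!\left( \frac{r(x, y)}{\beta} \right)
    \quad\Longrightarrow\quad
    \log \pi^{*}(y \mid x) = \log \pi_{\text{base}}(y \mid x) + \frac{1}{\beta}\, r(x, y) + \text{const},

so greedy (temperature-0) decoding from the tuned model roughly picks the continuation that maximizes base log-probability plus scaled reward.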


I don't think humans in general have this concept of "best" either.

But humans are able to build certain routines within their own system to help them rationalize.


> Isn't that exactly how humans learn to respond to stimuli?

Maybe it is, maybe it isn't. Maybe we are "just" an incredibly powerful prediction engine. Or maybe we work from a completely different modus operandi, and our ability to predict things is an emergent capability of it.

The thing is, no one actually knows what makes us intelligent, or even how to define intelligence for that matter.


Yes, if you're in the no-free-will school of thought, then that would be what humans do.


How is it different from S3?


Sounds like they provide a caching layer in between:

"Every blob uploaded is stored in-memory to provide fast access — and then blobs that are not read frequently are eventually automatically moved to disk for low-cost, long-term storage. When you make a new read request to a blob in disk, it is reloaded back to memory, providing you the fastest possible access again."

For a developer-focused tool, it strikes me as odd that they wouldn't just use the term "caching" though?


Yes, I agree with your point. It reminds me a little of the ongoing debate over CDN vs. Edge. However, one difference we offer is that eventually our 'cache' will flush to a persistent storage layer, so you don't have to think about managing your memory or disk resources. Data not used for weeks ends up in low-cost object storage, saving you from high memory storage costs and from the repetitive task of performing this archiving operation yourself.


> eventually our 'cache' will flush to a persistent storage layer

What happens if the server goes down before the flush?


When a blob is saved, I first write it to Postgres (https://neon.tech), which ensures there's a persistent backup. However, it's typically a waste of money to store infrequently accessed blobs on disk in Postgres over months and years. After 4-6 weeks, data is offloaded to object storage so that you benefit from low long-term storage costs.

The lifecycle of a blob works out roughly as follows, based on its last read date (sketched in code below):

Read < 30 mins ago:
- In-memory: Cloudflare CDN

Read < 30 days ago:
- On disk: Redis auto-tiering (memory/disk combination)
- On disk: Postgres

Read > 30 days ago:
- Object storage: Backblaze B2
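
Here's a rough sketch of that tiering decision, using the thresholds above; the function name and return labels are just illustrative, not their actual API.

    from datetime import datetime, timedelta

    def storage_tier(last_read: datetime) -> str:
        """Return the tier a blob would live in, given its last read time."""
        age = datetime.utcnow() - last_read
        if age < timedelta(minutes=30):
            return "in-memory (Cloudflare CDN)"
        if age < timedelta(days=30):
            return "disk (Redis auto-tiering + Postgres)"
        return "object storage (Backblaze B2)"

    # e.g. a blob last read 45 days ago has been offloaded to object storage
    print(storage_tier(datetime.utcnow() - timedelta(days=45)))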


Have you tried out https://underhive.in/ for this?


I haven't, but it looks like it's a Git repo hosting solution? The issue with using Git with data directly is that you generally lose the per-row/feature change information. With common binary GIS data formats, just putting them into Git loses a lot of the utility and will blow out the size of the repo as you apply changes.

Kart gives you row-level tracking, so you can see who made what change and when, and diffs are small and fast to apply.

