Hacker News | badtuple's comments

It seems like the point being made is that because an LLM lives within the universe and can't store the entire universe, it would need to "reason" to produce coherent output of a significant length. It's possible I misunderstood your post, but it's not clear to me that any "reasoning" isn't just really good hallucination.

Proving that an AI is reasoning and not hallucinating seems super difficult. Even proving that there's a difference would be difficult. I'm more open to the idea that reasoning in general is just statistical hallucination even for humans, but that's almost off topic.

> Any model that trivially depends upon statistics could not do causal reasoning, it would become exponentially less likely over time. At long output lengths, practically impossible.

It's not clear to me that it _doesn't_ fall apart over long output lengths. Our definition of "long output" might just be really different. Statistics can carry you a long way if the possible output is constrained, and it's not like we don't see weird quirks in small amounts of output all the time.

It's also not clear to me that adding more data leads to a generalization that's closer to the "underlying problem". We can train an AI on every sonnet ever written (no extra tagged data or metadata) and it'll be able to produce a statistically coherent sonnet. But I'm not sure it'll be any better at evoking an emotion through text. Same with arithmetic. Can you embed the rules of arithmetic purely in the structure of language? Probably. But I'm not sure the rules can be reliably reversed out enough to claim an AI could be "reasoning" about it.

It does make me wonder what work has gone into detecting and quantifying reasoning. There must be tons of it. Do we have an accepted rigorous definition of reasoning? We definitely can't take it for granted.


Reasoning and hallucination are the shallower terms that often come up in discussions of this topic, but they ultimately don't capture where and how the model is fitting the underlying manifold of the data -- which information theory in fact describes rather well. That's why I referenced Shannon entropy: it's important as an interpretive framework, it provides mathematical guarantees, and it ties nicely into the other information-compressive measures which I do feel answer some of the queries that seem more ambiguous to you.

That is the trouble with mixing inductive reasoning with a problem that has mathematical roots: to some degree it's intractable to directly measure how much of something is happening, but we have a clean mathematical framework that answers these questions well, so using it can be helpful.
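For concreteness, here's a minimal sketch of the empirical Shannon entropy being invoked here (the function name and the sample strings are mine, purely for illustration):

```python
import math
from collections import Counter

def shannon_entropy(text: str) -> float:
    """Empirical Shannon entropy of a string, in bits per character."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Four equiprobable symbols need 2 bits/char; a fair two-symbol mix needs 1.
print(shannon_entropy("abcd"))  # 2.0
print(shannon_entropy("aabb"))  # 1.0
```

That per-symbol quantity is exactly what lower-bounds the average code length any statistical model can achieve on the distribution, which is where the "mathematical guarantees" come from.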

The easiest example of yours that I can tie back to the math is the arithmetic in the structure of language. You can use information theory to show this pretty easily, you might appreciate looking into Kolmogorov complexity as a fun side topic. I'm still learning it (heck, any of these topics goes a mile deep), but it's been useful.
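Kolmogorov complexity itself is uncomputable, but compressed length is the standard computable stand-in for it. A rough sketch (the sample strings are mine) of how structure, such as repeated arithmetic facts, shows up as compressibility:

```python
import random
import zlib

random.seed(0)  # deterministic pseudo-random "noise" for the comparison
noise = bytes(random.getrandbits(8) for _ in range(1200))
pattern = b"2+2=4 " * 200  # 1200 bytes of highly regular text

# The regular string compresses to a tiny fraction of its size;
# the pseudo-random one barely compresses at all.
print(len(zlib.compress(pattern, 9)), len(zlib.compress(noise, 9)))
```

The same intuition is what lets you argue that regularities like arithmetic can be embedded in, and recovered from, the statistical structure of text.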

Reasoning on the other hand I find to be a much harder topic, in terms of measuring it. It can be learned, like any other piece of information.

If I could recommend one piece of literature to start diving into some of the meat of this, I feel you would appreciate this most. It's a crazy cool field of study, and this paper in particular is quite accessible and friendly to most backgrounds: https://arxiv.org/abs/2304.12482


Honestly, on a non-toy project, build times with cgo are _brutal_. I usually agree with you, but when the build time on a beefy computer goes from under a second to over a minute, you notice it.

Linters and IDEs get slow when they check for errors, tests run slow, feedback drags, and all your workflows that took advantage of Go's fast compile times are now long enough that your flow and mental context disappear.

I'm way more lenient with other languages since the tooling and ecosystem are built around long build times. Your workflows compensate. But Go's tooling and ecosystem assume it compiles fast and treat things more like a scripting language. When that expectation is violated it hurts and everything feels like it's broken.


In my experience, encapsulating access to sqlite in its own Go package helps a lot with avoiding recompilation of the C source, which is indeed brutally slow. It actually seems to be much slower than compiling with gcc from the command line. Does anyone know why that is?


Why would you have to recompile sqlite every time?

I guess you just need to compile the .a once and then reuse it?

If you're rebuilding it every single time, your build is set up wrong.


I'm curious about this too, but haven't been able to figure it out. I want to do some extremely basic detection on user specified videos and it'd be really slick to do it entirely in the browser.

Unless someone has a trick I haven't thought of though, I think I'll have to download it first which isn't nearly as cool :/


It's annoying because it's just the same-origin policy stopping it from working.

I see there is an origin parameter[1] which sounds like it is nearly what is needed.

I don't know exactly what CORS setting is needed to make this work though.

[1] https://developers.google.com/youtube/player_parameters#orig...


He's an adult, let him make his own choices.


Surely this changes depending on the individual giving/receiving? Regardless of intent you have people who would prize both.

I've never worked in a place that had them, but I do know an individual who views it as a source of pride to collect as much as possible from peer bonuses. According to him it's not about the money: it helps ease his impostor syndrome because it's quantifiable, and therefore suddenly the only "real" feedback he gets, rendering any soft feedback from his manager worthless in comparison.

Not saying that's common, just that it's one of what must be many, many interpretations of the incentive structure.


Does anyone have an example of a codebase that uses Design By Contract to an extreme extent? You really only get a sense of the power of patterns like this when you see that power abused. I've greatly enjoyed writing 2 very small codebases in that style...but that doesn't mean I did it well or that it'd hold up after years of maintenance from a revolving door of developers.


Since this thread is very general, I'd like to specifically request anything that'd help one dive into Model Theory. Everything I've found labeled introductory is very dense, and I feel like I'm missing some prereqs it assumes. It's not easy to know what those are though.


Are you familiar with a general treatment of first order logic, including the completeness theorem? If not, I suggest starting there, for which I'd recommend:

- Chiswell & Hodges, "Mathematical Logic" - This is very clear and careful and gives lots of motivations for introduced concepts, but moves somewhat slowly (introducing FOL in three stages, first propositional, then quantifier-free and then full FOL)

- Leary & Kristiansen, "A Friendly Introduction to Mathematical Logic" (available online for free) - this moves somewhat faster and skips "lower" logics such as propositional logic and uses a different proof system. If you read up to Löwenheim-Skolem, you already have a little bit of Model Theory (the rest of the book is more about computability and logic).

If you've already done FOL, I would recommend:

- Kirby, "An Invitation to Model Theory". This doesn't presuppose much more than FOL (and even recapitulates it briefly) and some basic familiarity with undergraduate math concepts (e.g. groups, fields, vector spaces) and explains everything very carefully. I also think it has great exercises.


Introductory model theory has no particular prerequisites beyond elementary set theory and logic, but it does demand a fair amount of mathematical maturity. If this is your first exposure to higher math, you're probably better off starting with another topic. Real analysis or group theory would be the traditional choices.


The Wikipedia article has over a dozen references, including a number of online books.

https://en.wikipedia.org/wiki/Model_theory


I guess one question I'd have is why you think your brain is "operating properly". It's definitely perceiving and constructing a narrative out of those perceptions, but that happens differently for every brain and it really doesn't seem like there's a perfect version of it that's "correct".

If you're willing to accept that you can't really rank or judge modes of mental experience quantitatively on some kind of non-relative scale (which would require access to a "reality" outside yourself), then it shouldn't be too much of a leap to say that altering how the brain functions, within tolerable bounds, could be an optimization of your experience (or equal but just different).


You're still not seeing the contradiction between making truth claims and having beliefs and the skeptical position you're entertaining. By your own standards, I could ask how you know there's something called a brain, that it has something to do with perception, etc. Maybe you hallucinated the brain? Maybe it's just some weird belief you have? Maybe hallucinations aren't a thing? How do you know what other brains do or don't do differently? And why can't some of them be wrong?

Attaching doubt to things "just because" isn't rational and cannot be resolved rationally precisely because such doubts are not rationally motivated. If I say to you "I doubt that you are here", for no reason other than some arbitrary skepticism about my perceptual faculties, then there is no way that that doubt can rationally be resolved. The very idea of hallucination presumes a normative perception. That we can know that we can misperceive or be subject to illusions itself presumes that we can tell the difference. Otherwise, we are just positing idle and detached possibilities while tacitly, and paradoxically, drawing on various convictions about the real.


There's no truth claims. The point is that under the influence of psychedelics, you realize you can't know the truth.

I understand you're looking at it from a scientific point of view, but the discussion is not scientific. Consciousness is probably the hardest body of knowledge to integrate with science. Nobody knows where consciousness comes from. Nobody knows how to measure it.

I can tell you: "I'm conscious and aware", but there's no way I can prove to you I'm not a philosophical zombie (someone who acts like it's conscious, but isn't). Currently, there's no way to measure consciousness.

>By your own standards, I could ask how you know there's something called a brain, that it has something to do with perception, etc. Maybe you hallucinated the brain? Maybe it's just some weird belief you have? Maybe hallucinations aren't a thing? How do you know what other brains do or don't do differently? And why can't some of them be wrong?

I think you nailed it. I don't know whether there's a brain, or if there is something else, and this something else is hallucinating this reality where there is a brain, or maybe something else entirely. Sure, we can make scientific claims about stuff when analyzing reality within the bounds of our perception, but if you try to go beyond that, you're on your own.


>I can tell you: "I'm conscious and aware", but there's no way I can prove to you I'm not a philosophical zombie (someone who acts like it's conscious, but isn't).

You can't prove that to yourself either, because a zombie has the same thoughts as you. You can't differentiate even subjectively whether you're a zombie or not.


Interesting idea, but I'm not sure I follow completely. Philosophical zombies may have the same "thoughts", but they do not have qualia; they act like they do, but they don't. As for me, I'm pretty sure I have qualia. I've been watching this movie called "my life" ever since I was born. If I were a philosophical zombie, it all would have passed in the dark.

I think the phrase "I think, therefore I am" is the essence here. A philosophical zombie does not "think". It looks like it does, but that's just the deterministic result of neurochemical events going on in a brain that lives in the dark. Something like a neural network, for instance: it can exhibit thought-like behavior without experiencing anything (as far as we know).


If a zombie could detect that it doesn't have thoughts, it would report it. This can be done with reflection. If a zombie can't do reflection, that would be an observable functional difference from a human, which is not allowed by definition. Therefore a zombie knows it has thoughts the same way a human knows it. A zombie only lacks qualia on the assumption that it's possible to think without qualia.


If you have thoughts, you can't doubt whether or not you have thoughts. In order to do so you need to doubt yourself, which is a thought.


Did you reply to the right comment? I mean you can't prove you're not a zombie; thoughts don't help with this, because a zombie has all the same thoughts.


A zombie has no thoughts, that's the point.


This had absolutely nothing to do with the article. Why did you decide to spend your time bashing, unprompted, something you don't even use?


Agreed. I'm going on strike.


Three strikes and you’re banned forever, better choose the moment well.

