Hacker News | n_u's comments

I think they mean they would prefer more rigorous statistical analysis.

"Rigor cleans the window through which intuition shines" - Ellis Cooper


"Collapse" isn't within the statistical distribution though, so you'd still need to apply judgement in any case. I suppose it's a word with many definitions.

> "Collapse" isn't within the statistical distribution though

Uh? Maybe you could explain what you mean by this a bit more.


1. It's not a rigorously defined term.

2. "System collapse" would be unexplored territory, so how would statistical analysis be able to infer when it occurs?


1. Not really. If the crash rates we're seeing under the Trump administration are higher than any similar length period in the last ~10 years, we should start to worry.

2. See above.


Are you an LLM? This comment is written twice in this thread, and of your last 10 comments, 6 use the pattern "X isn't Y" or "X didn't Y, Z did":

https://news.ycombinator.com/item?id=47469767 > The concern isn't that AI reasons differently.

https://news.ycombinator.com/item?id=47469834 > The concern isn't that AI reasons differently.

https://news.ycombinator.com/item?id=47470111 > The problem isn't time.

https://news.ycombinator.com/item?id=47469760 > Airlines have been quietly expanding what they can remove you for. This isn't really about headphones.

https://news.ycombinator.com/item?id=47469448 > Good tech losing isn't new, it's just always a bit sad when it happens slowly

https://news.ycombinator.com/item?id=47469437 > The tool didn't fail here, the person did


Please don't take up space in the comment section with accusations. You can report this at the email below and the mods will look at it:

> Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.

> https://news.ycombinator.com/newsguidelines.html


I find it kind of helpful and interesting to see a subset of these called out with a bit of data. It helps keep my LLM detector trained (the one in my brain, that is), and I think it helps a little with expressing the community consensus against this crap. In this case, I'm glad the GP posted something, as it's definitely not mistaken.

Definitely AI. Every comment sounds like GPT.

I've found it's ok at Rust. I think a lot of existing Rust code is high quality and also the stricter Rust compiler enforces that the output of the LLM is somewhat reasonable.

Yes, it's nice to have a strict compiler, so the agent has to keep fixing its bugs until it actually compiles. Rust and TypeScript are great for this.

A big downside with Rust is the compile times. Being in a tight AI loop just wasn't part of the design of any existing programming language.

As languages designed for (and probably written by) AI come out over the next decade, it will be really interesting to see what tradeoffs they make.


"cargo check" is fast and it's enough for the AI to know the code is correct.

I would argue that because Rust is so strict, having the agent compile and run tests on every iteration is actually less necessary than in other languages.

I program mostly in Python, but I keep my projects strictly typed with basedpyright, and it greatly reduced the number of errors the agent makes because it gets immediate feedback when it has done something stupid.

Of course you still need to review the code because it doesn't solve logic bugs.


cargo check is faster; it's not fast

>Being in a tight AI loop just wasn't part of the design of any existing programming languages.

I would dare to say that any Lisp (Common Lisp, Clojure, Racket, whatever) is perfect for a tight AI loop thanks to REPL-driven development. It's an interesting space to explore and I know that the Clojure community at least are trying to figure out something there.


Quite sure it's not about the language but the domain.

Agreed. When I've written very low level code where there are "odd" constraints ("this function must never take a lock, no system calls can be made" etc) the LLM would accidentally violate them. It seems sort of obvious why - the vast majority of code it is trained on does not have those constraints.

Good article! Small suggestions:

1. It would be nice to define terms like RSI or at least link to a definition.

2. I found the graph difficult to read. It's a computer font that is made to look hand-drawn and it's a bit low resolution. With some googling I'm guessing the words in parentheses are the clouds the model is running on. You could make that a bit more clear.


I think the argument is that the compiler does not enforce that the error must be checked. It's just a convention. Because you know Go, you know it's convention for the second return value to be an error. But if you don't know Go, it's just an underscore.

In a language like Rust, if the return type is `Result<MyDataType, MyErrorType>`, the caller cannot access the `MyDataType` without using some code that acknowledges there might be an error (match, if let, unwrap etc.). It literally won't compile.


When you see .unwrap in Rust code, you know it smells bad. When you see x, _ := in Go code, you know it smells bad.

> But if you don't know Go, it's just an underscore.

And if you don't know rust, .unwrap is just a getter method.


One big difference is that with unwrap in Rust, if there is an error, your program will panic. Whereas in Go if you use the data without checking the err, your program will miss the error and will use garbage data. Fail fast vs fail silently.

But I'm just explaining the argument as I understand it to the commenter who asked. I'm not saying it is right. They have tradeoffs and perhaps you prefer Go's tradeoffs.


> When you see x, _ := in Go code, you know it smells bad.

What if it’s a function that returns the coordinates of a vector and you don’t care about the y coordinate?


Haven't jumped into rust for a while. Had to read up on what .unwrap() does.

   x, _ := 
In the context of .unwrap(), the _ here references an ignored error. Better laid out as:

  func ParseStringToBase10i32AndIDoNotCare(s string) int32 {
     i, _ := strconv.ParseInt(s, 10, 32)
     return int32(i)
  }
Unhandled errors in Go keep the application going, whereas Rust's .unwrap() crashes.

Ignoring an output value or set is just fine. You don't always need both the key and the value of a map, nor the y axis in vector (x, y, z) math.


Go has tools for checking things like this. It's just not in the compiler. If you don't want to enforce that all errors are checked, go doesn't force you to. If you do, it requires you to run an extra tool in your build process.

(Or in your commit hook. If you want to develop without worrying about such things, and then clean it up before checkin, that's a development approach that go is perfectly fine with.)


> requires you to run an extra tool

And the more I work with Go, the less I understand why warnings were not added to the compiler. Essentially, instead of having them in the compiler itself, one needs to run a separate tool, which will have a much smaller user base.

But anyway, in Go, it's sometimes fine to have both non-nil error and a result, e.g. the notorious EOF error.


> if the return type is `Result<MyDataType, MyErrorType>`, the caller cannot access the `MyDataType` without using some code that acknowledges there might be an error (match, if let, unwrap etc.)

I think you can make the same argument here - Rust provides unwrap, and if you don't know Rust, that's just how you get the value out of the Result type.


The big difference is that with `(T, error)` as a return type, any value on the caller side will look like a valid one (thanks to zero values).

  a, err := f()
  // Whether you forgot to handle `err` or not,
  // `a` carries a zero value or some other valid-looking value.
In rust it's not the case, as the `T` in `Result<T, E>` won't be constructed in case of an error.


> "the `a` carries a zero value, or some other value."

Or you could return pointers and use `nil` in the error case. Bonus is that it'll then panic if you try to use it without checking the error.

(Yes, I know, it makes everything else a faff and is a silly idea.)


> The security research community has been dealing with this pattern for decades: find a vulnerability, report it responsibly, get threatened with legal action. It's so common it has a name - the chilling effect.

Governments and companies talk a big game about how important cybersecurity is. I'd like to see some legislation to prevent companies and governments [1] behaving with unwarranted hostility to security researchers who are helping them.

[1] https://news.ycombinator.com/item?id=46814614


I'm not a lawyer, but I believe the EU's Cyber Resilience Act combined with the NIS2 Directive do task governments with setting up bodies to collaborate with security researchers and help deal with reports.

The law seems written to target vendors and products rather than services though, reading through this: https://www.acigjournal.com/Vulnerability-Coordination-under...


Original paper https://www.nber.org/system/files/working_papers/w34836/w348...

Figure A6 on page 45: Current and expected AI adoption by industry

Figure A11 on page 51: Realised and expected impacts of AI on employment by industry

Figure A12 on page 52: Realised and expected impacts of AI on productivity by industry

These seem to roughly line up with my expectation that the more customer-facing or physical-product-based your industry is, the lower the usage and impact of AI (e.g. construction, retail).

A little bit surprising is "Accom & Food" being 4th highest for productivity impact in A12. I wonder how they are using it.


Figure right after A6 is pretty striking. Ask people if they expect to use AI and a vast majority say yes. Ask if they expect to use AI for specific applications and no more than a third say yes in any industry. That should be telling imo. What we have is a tool that looks impressive to any non-SME for a lot of applications. I would caution against the idea that benefits are obvious.


Neat!

Reminds me of Draw a Fish https://news.ycombinator.com/item?id=44719222

and their security incident lol https://news.ycombinator.com/item?id=44784743


You can also edit it yourself and then ask a friend, relative, or colleague to read the parts you are struggling with improving. "Does this sentence flow? Is there a better way to say this? Is this confusing?"

If you're going to sink time into writing a book, it's worth spending some time editing it so your message gets through clearly. But that's just my opinion, your mileage may vary.


what does "frozen up" mean?


Corporate bonds were simply not being bought, at any price. Same with commercial paper. Nobody knew what firms were going to still exist in a week so nobody was willing to lend any money at all.


> Nobody knew what firms were going to still exist in a week so nobody was willing to lend any money at all.

Perhaps I'm misunderstanding, but isn't this another way of saying it was too risky for people to invest? That seems to be the same concept as the quote you cited from the parent comment: "either the return wasn't commensurate to the risk".


I guess you could say that but the underlying problem was that the risk was entirely opaque so it couldn't actually be quantified and hedged against. The TARP loan ("shakedown" might be a better term honestly) gave financial firms time to sort out what their actual positions and exposure were; there wasn't time to let the market sort that out over months and at the cost of every major company (even non-financials) failing because of lack of access to credit.


Yeah, it seems there's a bit of asymmetry between a normal lender and the federal government here: as a normal lender, you might not be able to lend enough to guarantee the debtor survives. Also, what the government decides to do may significantly influence the lender's behavior. If the lender thinks there's a chance the government will bail them out, they would probably prefer that and not give a loan.

Whereas the federal government can write a check for $633.6 billion and be much more certain the debtors will survive and pay it back.


So the government has negotiated from a position where the average taxpayer could be buying $10 worth of assets for $1 and have a go at managing it properly and creating some wealth, to a position where the taxpayer pays $1, the government buys the $10 in assets and gives it to some wealthy idiot, and there is a nominal return which at that time I imagine went into killing people in Iraq because Muslims, amirite? All those bombs cost a bomb.

And then we see 20 good years of economic prosperity where the US predictably got even wealthier than it previously was and there is great political stability and well-loved presidents like Mr Trump who represent the satisfaction US citizens feel for the economic highs they have reached!

What a fantastic deal for the average taxpayer. Let the confetti fall. Well done government, saved the day there.


Where it went was bailing out the automakers. It was a big story at the time and I'm starting to worry people just don't form long-term memories anymore.


Who are you going to believe, your own memories or present-day propaganda on social media?


> the US predictably got even wealthier than it previously was

If you just look at the economic indicators, then it did. Certainly way better than the "no intervention" counterfactual would have gone. People do not like it when all the ATMs stop working.

There is a lot of discourse to be had as to why people aren't feeling that personally.

> killing people in Iraq because Muslims, amirite? All those bombs cost a bomb.

Sadly there is/was massive bipartisan support for this bullshit. Including from the public. I note from a chronology perspective that most of the money in Iraq was spent/lost/wasted before 2008.


The problem is circular. The risk is that your counterparty goes bust. Therefore nobody wants to make any moves until they can be sure that (mostly) every other player is stable. But because no moves are happening, that in itself is destabilizing.

That is, the big risk is "what if the state doesn't intervene?"

Correspondingly, the state has a special move that only it can play, because "what if the state doesn't intervene" is not a risk to the state itself. The act of intervening makes the risk go away. That's part of the privilege of being the lender of last resort with the option to print currency.

(which is why this was a much more serious problem for Greece and Ireland, which as Eurozone members were constrained in their ability to even contemplate printing their way out of the problem!)

