Hacker News | Marazan's comments

Plonk

Yeah, and then Uber sold off its self driving research team.

I'm talking about after, of course. They retained a massive investment in Aurora as part of that deal. They invested in Waabi not long after, then Nuro and Avride, and started partnerships with Waymo, Motional, and others.

Car buyers are not confused. The market is naturally highly segmented. My need ("everyday, low-distance compact car that can cope with narrow city-centre streets, plus once-a-month motorway driving") is not met by the same car as "family of 5 with a big dog living in a village".

It's not their job to fix your bug.

My favourite PostgreSQL optimization was running a SELECT on a multi-billion-row table before running a DELETE, so that shared_buffers would be filled with the rows that would need to be deleted.

Postgres runs DELETEs single-threaded, and that includes the row-selection part of the DELETE. By running a completely separate SELECT first, Postgres can parallelize the SELECT and populate the cache quickly. The single-threaded DELETE can then operate on in-memory data rather than endlessly blocking on loading data from disk.
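A minimal sketch of the trick described above, assuming a hypothetical `events` table and date predicate (the table name and WHERE clause are illustrative, not from the original comment):

```sql
-- Warm the cache first: a plain SELECT is eligible for parallel
-- workers, so the relevant pages land in shared_buffers quickly.
SELECT count(*) FROM events WHERE created_at < '2020-01-01';

-- The DELETE itself runs single-threaded, but now it mostly hits
-- pages already cached in shared_buffers instead of waiting on disk.
DELETE FROM events WHERE created_at < '2020-01-01';
```

Note that this only helps when the rows to be deleted fit (at least partially) in shared_buffers; for a predicate touching more data than the cache holds, the earliest-read pages will be evicted before the DELETE reaches them.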


> that the democrats don't actually have popular policies.

Democrat policies polled better than Republican policies at the last election.


It's more accurate to say that leftist policies polled well, which they always do. I think the main issue is that people don't really trust Democrats to do anything they say they want to do.

Hillary and Kamala also polled well... Pretty sure all these polls prove is that polls are not reliable.

Remember that one guy who promised to close Guantanamo prison?

Nobody trusts the polls now. They have been comically wrong for a while.

Ah, it is turtles all the way down.


Yes. But it's no different from the question of how a non-tech person can make sure that whatever their tech person tells them actually makes sense: you hire another tech person to have a look.


Who is writing the tests?


Yes, but compilers (in the main) do not have a random number generator deciding what output to produce.


This seems like a glib one-liner, but I do think it is profoundly insightful about how some people approach thinking about LLMs.

It is almost as if there is hardwiring in our brains that makes us instinctively correlate language generation with intelligence, and people cannot separate the two.

It would be as if the first calculators ever produced, instead of responding with 8 to the input 4 + 4 =, printed out "Great question! The answer to your question is 7.98", and that resulted in a slew of people proclaiming the arrival of AGI (or, more seriously, the ELIZA effect is a thing).

