game_the0ry's comments

This has been posted before, but PlanetScale also has a great SQL for developers course:

https://planetscale.com/learn/courses/mysql-for-developers


There is for sure a "second brain" product hiding in plain sight for one of the frontier AI companies. Google/Gemini should be all over this right now.

For fun, I ran this query through an AI:

> How many impoverished American children could you feed for the cost of one F-35 fighter jet?

Here is the answer:

> Using a rough estimate of $110.3 million for one F-35A and about $3,500 per child per year to cover food assistance, that would feed roughly 31,500 children for one year.

MPACGA -- Make Poor American Children Great Again.


The cost to operate a single jet is $6-7 million a year, so the total cost over its 30-40 year lifetime would be closer to $400m :(
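The back-of-envelope math can be sketched like this (all figures are the rough estimates quoted in the thread, not authoritative costs):

```python
# All figures are the thread's rough estimates, not authoritative costs.
F35_FLYAWAY = 110_300_000      # ~$110.3M for one F-35A
ANNUAL_OPERATING = 7_000_000   # ~$6-7M/year to operate
SERVICE_YEARS = 40             # ~30-40 year service life
FOOD_PER_CHILD_YEAR = 3_500    # ~$3,500/child/year of food assistance

# Purchase price alone: roughly 31,500 children fed for a year.
children_purchase = F35_FLYAWAY // FOOD_PER_CHILD_YEAR

# Lifetime cost (purchase + operations) lands near the ~$400M figure.
lifetime_cost = F35_FLYAWAY + ANNUAL_OPERATING * SERVICE_YEARS
children_lifetime = lifetime_cost // FOOD_PER_CHILD_YEAR

print(children_purchase)   # 31514
print(lifetime_cost)       # 390300000
print(children_lifetime)   # 111514
```

Using the top end of the operating estimates, the lifetime figure feeds roughly 3.5x as many children as the purchase price alone.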

There is some unintentional good marketing here -- the model is so good it's dangerous.

Reminds me of the book The 48 Laws of Power -- so good it's banned from prisons.


Unintentional? This sort of marketing has been both Anthropic's and OpenAI's MO for years...

Agree. I think they're intentionally sitting on the fence between "These models are the most useful" and "These models are the most dangerous".

They want the public and, in turn, regulators to fear the potential of AI so that those regulators will write laws limiting AI development. The laws would be crafted with input from the incumbents to enshrine/protect their moat. I believe they're angling for regulatory capture.

On the other hand, the models have to seem amazingly useful so that they're made out to be worth those risks and the fantastic investment they require.


They should pick a lane because it’s not very believable if you put these things into defense systems and in the next minute claim that humanity is existentially threatened. Either you’re lying, or ruthless, or stupid.

The new Power Mac® G4 with Velocity Engine®. So powerful, the government classifies it as a supercomputer and a potential weapon.



Oh no, pls don't ask about our product, it's too good, it's so X-Treme, it's Dangerously Cheesy

Off topic, but that site has really nice design

Hm, I couldn't read it due to the huge contrast and had to switch to reader mode, so...

I personally find it to be perfectly readable. I've heard of people with issues with white text on a black background, but I don't fully understand it. Do you have astigmatism?

I do, although my astigmatism is pretty light and I wear glasses for it

What colors were you seeing? It's white text on a black background for me -- both super common and plenty readable.

Yeah, same. It gives me a bit of a halo effect on the letters, making it much harder to read. My astigmatism is pretty light and I wear glasses for it, but the text is still difficult to read for me.

Really? I generally very much like to have a lot of contrast, but too much can definitely hurt my eyes.

I mean, I'm not a designer but it was interesting enough to call out.

For those curious about how sama got to where he got and stayed on top for so long, I recommend you read the book: The Sociopath Next Door by Martha Stout.

I am fairly confident when I say this -- sama is a sociopath. I don't know how anyone with solid intuition could even come to any other conclusion than the guy is deeply weird and off-putting.

Some concepts from the book:

> Core trait: The defining characteristic is the absence of conscience, meaning they feel no guilt, shame, or remorse.

> Identification: Sociopaths can be charming and appear normal, but they often lie, cheat, and manipulate to get what they want.

> The Rule of Threes: One lie is a mistake, two is a concern, but three lies or broken promises is the pattern of a liar.

> Trust your instincts over a person's social role (e.g., doctor, leader, parent)

Check and check.

OpenAI is too important to trust sama with. He needs to go. In fact, AI should be considered a public good, not a commodity pay-as-you-go intelligence service.


I was with you right up until the final paragraph, but this made me do a double take:

> OpenAI is too important to trust sama with.

...wat? They made a chat bot. How can that possibly be so existentially important? The concept of "importance" (and its cousin "danger") has no place in the realistic assessment of what OpenAI has accomplished. They haven't built anything dangerous, there is no "AI safety" problem, and nothing they've done so far is truly "important". They have built a chat bot which can do some neat tricks. Remains to be seen whether they'll improve it enough to stay solvent.

The whole "super serious what-ifs" game is just marketing.


Yeah the whole fearmongering is clearly just marketing at this point. Your LLM isn't going to suddenly gain sentience and destroy humanity if it has 10x more parameters or trains on 10x more reddit threads.

I'm not even sure we're any closer to AGI than we were before LLMs. It's getting more funding and research, but none of the research seems very innovative. And now it's probably much more difficult to get funding for anything that's not a transformer model.


> I'm not even sure we're any closer to AGI than we were before LLMs.

I mean this is very obviously untrue. It'd be like saying we aren't any closer to space flight after watching a demonstration of the Wright Flyer. Before 2022-2023 AI could barely write coherent paragraphs; now it can one-shot an entire letter or program or blog post (even if it's full of LLM tropes).

Just because something is overhyped doesn't mean you have to be dismissive of it.


In hindsight there's an obvious evolutionary pathway from the Wright Flyer to Gemini/Apollo/Soyuz... but at the time, in 1903, there absolutely was not, and anyone telling you so would have been a crank of the highest degree. So it may turn out that LLMs have some place on the evolutionary path to AGI, or it could turn out they're a dead end like Cayley's ornithopters. Show me AGI first, then we can discuss whether LLMs had something to do with it.

In order to get to space, you must first be capable of flight through the atmosphere. That should have been apparent to anyone even then, because the atmosphere sits between the ground and space.

Regardless of whether spaceflight is still 1000 or 100 or 50 years away, you are still closer than you were before you demonstrated the ability to fly.


Point is that LLMs could be a local minimum we are now economically stuck in until the hype wears off.

Or we could be stuck here for decades pending a breakthrough nobody alive today can even conceive of, or we could be compute limited by a half dozen orders of magnitude. Or it could happen next week. That's the nature of breakthroughs--you just can't have any idea when or how (or if) they'll happen.

I suspect there's some other category, which isn't really a sociopath and isn't really a not-sociopath, which we don't have a good definition for.

We only say a lot of CEOs are sociopaths because they're in that third category we haven't named, where they're very good at manipulating people, but also can feel conscience, guilt, remorse, etc, perhaps just muted or easier to justify against.

E.g. if you think you're doing something for the betterment of mankind, it doesn't really matter if you lie to some board members some year during the multi-decade pursuit.


That's not a third category, that's just a sociopath as seen by themself.

I doubt most sociopaths, when they’re honest, would agree they feel much guilt or remorse at all.

Whereas the people in the category I’m describing might feel those things, but prioritize those feelings far below the benefits of achieving what they set out to achieve.


> I doubt most sociopaths, when they’re honest, would agree they feel much guilt or remorse at all.

Yes that is the core trait I highlighted in the 1st bullet.


> I suspect there's some other category, which isn't really a sociopath and isn't really a not-sociopath, which we don't have a good definition for.

There is -- I call it "corpo sociopath." The corpo sociopath really comes out in the workplace, less so in personal life.


I think it’s learned sociopathy. People who start out knowing that a particular behavior is wrong, but over time are conditioned to feel like it’s fine, at least in certain situations (the corporate world being a prime example).

Aaron called him a sociopath back in the 2010s.

It's fairly obvious sociopathy is a prerequisite for top CEO jobs. Some just hide it better than others, or have better PR people.

I just realized how little I know about how async event loops work.
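For anyone else in the same boat, the core idea fits in a few lines: a single-threaded loop suspends coroutines at `await` points and resumes whichever one is ready next. A minimal Python sketch:

```python
import asyncio
import time

async def task(name: str, delay: float) -> str:
    # The event loop suspends this coroutine here and runs other
    # ready coroutines until the timer fires.
    await asyncio.sleep(delay)
    return name

async def main() -> list[str]:
    # Both tasks run concurrently on one thread, so total wall time
    # is ~max(delay), not the sum of the delays.
    return await asyncio.gather(task("a", 0.1), task("b", 0.1))

if __name__ == "__main__":
    start = time.perf_counter()
    print(asyncio.run(main()))          # ['a', 'b']
    print(time.perf_counter() - start)  # ~0.1s, not ~0.2s
```

The takeaway: concurrency here comes from cooperative suspension, not threads -- a coroutine that never awaits blocks the whole loop.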

I have lost faith in sama and openai management.

Technically, all journalism pieces on successful businesses are survivorship bias. No one writes or reads about the businesses that failed to find PMF.

And that’s the problem. The same traits they claim made the winners successful can be found among those who failed. Cherry-picking the winners creates false reasoning about success.

Is it a problem? Because I know there is survivorship bias and I still want to hear about the successes, so what is the problem then?

How many readers don’t know that? And how many fall for the false reasoning of those success stories?

You are assuming most readers are dumb. I do not have that assumption.

I may be paranoid, but I only run my AI CLI tools on a VPS. I have them installed locally but never use them there. On the VPS I go full yolo mode because I don't care what happens to it. It's a slightly more cumbersome workflow, but if you have dev + staging envs, then you never have to develop and run stuff locally, which brings local hardware requirements and costs down too (because you can develop on a base MacBook).
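A minimal sketch of that setup -- the `dev-vps` host alias and the agent command are placeholders for whatever you actually use, not real tool names:

```shell
# Hypothetical wrapper: "dev-vps" and the agent command below are
# placeholders. The point is that the agent only ever executes on a
# disposable VPS, never on the local machine.
run_on_vps() {
  host="$1"
  shift
  ssh "$host" "$@"
}

# Usage (not executed here): full yolo mode is acceptable because the
# VPS holds nothing sensitive and can be rebuilt from a snapshot.
# run_on_vps dev-vps 'cd ~/app && some-ai-agent --yolo "add auth tests"'
```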

