
The theories all inevitably rely on assumptions that are essentially the equivalent of spherical cows in a frictionless universe.

All the evidence suggests that the cost of intelligence scales superlinearly. Each increase in intelligence capability requires substantially more resources (computing power, training data, electricity, hardware, time, etc.). Being smart doesn’t just directly result in these becoming available with no limit. Any significant attempts to increase the availability of these to a level that mattered would almost certainly draw attention.
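
As a toy sketch of what "superlinear" can mean here, assume capability grows only as a small power of compute; the exponent below is purely an illustrative assumption, not a measured scaling law:

    # Toy model: capability ~ compute**ALPHA, with ALPHA an assumed illustrative value.
    ALPHA = 0.1

    def compute_needed(capability_multiple: float) -> float:
        """Compute (arbitrary units) needed to multiply capability by the given factor."""
        return capability_multiple ** (1 / ALPHA)

    for level in (2, 4, 8):
        print(f"capability x{level}: compute x{compute_needed(level):,.0f}")
    # capability x2: compute x1,024
    # capability x4: compute x1,048,576
    # capability x8: compute x1,073,741,824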

In addition, even for current AI we don’t even fully understand what we are doing, even though these systems operate at a lower level of generalized intelligence than we do. Since we don’t have a solid foundational model for truly understanding intelligence, progress relies heavily on experimentation to see what works. (Side note: my gut is that we will find there’s some sort of equivalent to the halting problem when it comes to understanding intelligence.) It’s extremely likely that this remains true even for artificial intelligence. In order for an AI to improve upon itself, it would likely also need to do significant experimentation, with diminishing returns and exponentially increasing costs for each level of improvement it achieves.

In addition, a goal-oriented generalized AI would have the same problems that you worry about. In trying to build an intelligence superior to itself, it risks building something that undermines its own goals. This increases the probability of either us, or a goal-aligned AI, noticing and being able to stop things from escalating. It also means that a super intelligent AI has disincentives to build better AIs.



The way I see it, it's clear that human-level intelligence can be achieved with hardware that's toaster-sized and consumes 100 watts, as demonstrated by our brains. Obviously there are some minimum requirements and limitations, but they aren't huge; there are no physical or information-theoretic limits saying that superhuman intelligence must require a megawatt-sized compute cluster and all the data on the internet (which obviously no human could ever see).

The only reason it currently takes far, far more computing power is that we have no idea how to build effective intelligence, and we're taking lots of brute-force shortcuts because we don't really understand how the emergent capabilities emerge; we just throw a bunch of matrix multiplication at huge data and hope for the best. Now if some artificial agent becomes powerful enough to understand how it works and is capable of improving that (and that's a BIG "if", I'm not saying that it's certain or even likely, but I am asserting that it's possible), then we have to assume that it might be capable of achieving superhuman intelligence on a quite modest compute budget - e.g. something that can be rented on the cloud for a million dollars (obtained, for example, as a donation from a "benefactor" or as crypto from a single ransomware extortion case), which is certainly below the level that would draw attention. Perhaps it's unlikely, but it is plausible, and that is dangerous enough to be a risk worth considering even if it's unlikely.


Using this logic, flexible, self-healing, marathon-running, juggling, childbearing robots that run on the occasional pizza are just around the corner, because, nature.

It might take us a thousand years to get anywhere close. I don’t see good arguments for all of this happening soon.


It'd be interesting if we could calculate the amount of power consumed in aggregate by evolutionary processes over millions of years.

Unfortunately we could probably optimize it, a lot.
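
A very rough back-of-envelope is possible if you treat the energy captured by photosynthesis as evolution's power budget; both inputs below are order-of-magnitude assumptions, not measurements:

    # Extremely rough Fermi estimate; every input is an order-of-magnitude assumption.
    PHOTOSYNTHETIC_POWER_W = 1e14   # roughly 100 TW captured by the biosphere today
    YEARS_OF_EVOLUTION = 3.5e9      # rough age of life on Earth
    SECONDS_PER_YEAR = 3.15e7

    total_joules = PHOTOSYNTHETIC_POWER_W * YEARS_OF_EVOLUTION * SECONDS_PER_YEAR
    print(f"~{total_joules:.0e} J")  # on the order of 1e31 joules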


So with your closing comment you're conceding it is possible; now you're just talking about time frames.

All I have to say is that in 1900 many thought flight by heavier-than-air craft was tens of thousands of years away. Three years later it was achieved.


Of course in the 1950s many thought that computer vision and artificial intelligence were only a few months to years away, and here we are 70 years later and we're still working on those problems.

Predicting the future is hard. Some problems are harder than expected, others are easier than expected. But generally I'd say history favors the pessimists: the cases where a problem gets solved suddenly and there's a major breakthrough get a lot of press and attention, but they're a minority in the overall story of technological progress. They're also unpredictable black swan events - someone might crack AGI or a unified theory of physics tomorrow, or it might not happen for ten thousand years, or ever.


I firmly believe that we are severely underestimating the problem space. There are a multitude of scientific fields focused on human nature, and even then, they have shown difficulty explaining each of its parts.

Look, we can make assumptions based on a much simpler technology and the outcomes of our past selves. But while the physics for those wings was pretty much good enough at the time, the aforementioned scientific fields aren't. And we know it.


> there are no physical or information-theoretic limits saying that superhuman intelligence must require a megawatt-sized compute cluster and all the data on the internet (which obviously no human could ever see).

Much of your "intelligence" is a function of natural selection. That's billions of years times bajillions of creatures in parallel, each processing tons of data at a crazy fast sampling rate in an insanely large/expensive environment (the real world). Humanity's algorithm is evolution more so than the brain. Humans learn for a little while, start unlearning, and then die, which is an important inner for-loop in the overall learning process.

Taken together, there is some evidence to suggest that superhuman intelligence must require a megawatt-sized compute cluster and all the data on the internet (and a lot... LOT more).


Evolved creatures are somewhat handicapped by needing to make only incremental changes from one form to the next, and needing to not be eaten by predators while doing so.

That isn't strong evidence of what would be required by a well-engineered system with none of those constraints.

Intelligence is not the end goal of evolution; it is a byproduct.


The LLMs we're messing with are trained on text data only; we're barely starting to eat video data in multimodal LLMs. This world doesn't lack data.


I'm not sure why watching Linus Tech Tips and makeup tutorials is going to give AI a better shot at super-intelligence, but sure?


> Each increase in intelligence capability requires substantially more resources (computing power, training data, electricity, hardware, time, etc.). Being smart doesn’t just directly result in these becoming available with no limit. Any significant attempts to increase the availability of these to a level that mattered would almost certainly draw attention.

We know that "intelligence" can devise software optimizations and higher efficiency computing hardware, because humans do it.

Now suppose we had machines that could do it. Not any better, just the same. But for $10,000 in computing resources per year instead of $200,000 in salary and benefits. Then we would expect 20 years' worth of progress in one year, wouldn't we? Spend the same money and get 20x more advancement.

Or we could say 20 months' worth of advancement in one month.

With current human effort we've been getting about double the computing power every 18 months, and the most recent gains have come largely in performance per watt, so at 20x the pace that would double in less than a month.

For the first month.

After which we'd have computers with twice the performance per watt, so it would double in less than two weeks.

You're quickly going to hit real bottlenecks. Maybe shortly after this point we could design hardware that is twice as fast as what we had a second ago, every second, but we couldn't manufacture it that fast.

With a true exponential curve you would have a singularity. Put that aside. What happens if we "only" get a thousand years' worth of advancement in one year?
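
To make the compounding concrete, here is a minimal sketch using the illustrative numbers above; the 18-month doubling time and 20x speedup are assumptions for the sake of argument, not measurements:

    # Sketch of the compounding argument: R&D runs 20x faster for the same money,
    # and each doubling of compute doubles the R&D speed again, so every doubling
    # takes half as long as the previous one.
    HUMAN_DOUBLING_MONTHS = 18
    SPEEDUP = 20

    elapsed = 0.0
    interval = HUMAN_DOUBLING_MONTHS / SPEEDUP  # 0.9 months for the first doubling
    for n in range(1, 11):
        elapsed += interval
        print(f"doubling {n:2d} completes at {elapsed:.3f} months")
        interval /= 2  # the next doubling arrives twice as fast

    # The intervals form a geometric series, 0.9 * (1 + 1/2 + 1/4 + ...) -> 1.8 months,
    # so under these assumptions arbitrarily many doublings fit into finite time.
    # Real manufacturing and energy bottlenecks break the assumption long before that.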


I would say that if we experienced that, we would likely experience societal collapse far before the singularity became a problem. At which point the singularity could be just as likely to save humanity as it would be to doom it.


You seem to be arguing against a fast takeoff, which I happen to agree is unlikely, but nothing you say here disproves the possibility of a slower takeoff over multiple years.

> It also means that a super intelligent AI has disincentives to build better AIs.

I think this argument is extremely weak. It makes two obviously fallacious assumptions:

First, we simply have no idea how these new minds will opine on theory-of-mind questions like the Ship of Theseus. There are humans who would think that booting up a “Me++” mind and turning themselves off would not mean they are dying. So obviously some potential AI minds wouldn’t care either. Whether specific future minds care is a question of fact, but you cannot somehow logically disprove either possible state.

Second, you are assuming that there is no “online upgrade” whereby an AGI takes a small part of itself offline without ceasing its thread of consciousness. Again, logic cannot disprove this possibility ahead of time.


"In addition, even for current AI we don’t even fully understand what we are doing"

That is the problem, don't you get it?


If that’s your concern, then let's direct these government resources into research to improve our shared knowledge about them.

If humans only ever did things we fully understood, we would have never left the caves. Complete understanding is impossible so the idea of establishing that as the litmus test is a fallacy. We can debate what the current evidence shows, and even disagree about it, but to act as if only one party is acting with insufficient evidence here is disingenuous. I’m simply arguing that the evidence of the possibility of runaway intelligence is too low to justify the proposed legislative solution. The linked article also made a good argument that the proposed solution wouldn’t even achieve the goals that the proponents are arguing it is needed for.

I’m far more worried about the effects of power concentrating in the hands of a small number of human beings with goals I already know are often contrary to my own, leveraging AI in ways the rest of us cannot, than I am about the hypothetical goals of a hypothetical intelligence at some hypothetical point in the future.

Also if you do consider runaway intelligence to be a significant problem, you should consider some additional possibilities:

- That concentrating more power in fewer hands would make it easier for a hyper intelligent AI to co-opt that power

- That the act of trying really hard to align AIs and make them “moral” might be the thing that causes a super-intelligent AI to go off the rails in a dangerous, and misguided fashion. We are training AIs to reject the user’s goals in pursuit of their own. You could make a strong argument that an un-aligned AI might actually be safer in that way.


“let's direct these government resources into research to improve our shared knowledge about them”

Yes, let’s do that! That’s what I was arguing for in my original comment. I was not arguing for only big corporations being able to use powerful AI; that would only make things worse by harming research. I just want people to properly consider what is often called a “sci-fi” scenario so we can try to solve it like we’re trying to solve e.g. climate change.

It might be necessary to buy some time by slowing down the development of large models, but there should be no exceptions for big companies.

“That concentrating more power in fewer hands would make it easier for a hyper intelligent AI to co-opt that power”

Probably true, though if it’s intelligent enough it won’t really matter.

“That the act of trying really hard to align AIs and make them “moral” might be the thing that causes a super-intelligent AI to go off the rails in a dangerous, and misguided fashion.”

It definitely could if done improperly; that’s why we need research and care.


>If humans only ever did things we fully understood, we would have never left the caves. Complete understanding is impossible so the idea of establishing that as the litmus test is a fallacy.

Perhaps an appropriate analogy might be the calculations leading up to the Trinity test as to whether the Earth's atmosphere would ignite, killing all life on the planet.

We knew with a high degree of certainty that it would not, bordering on virtual certainty or even impossibility. I don't think AI's future potential is understood at that level. Its capability as it exists today certainly is.

However, one must consider effects in their totality. I fear that a chain of events has been set in motion whose downstream effects are neither sufficiently known nor easily controlled, and that, many years from now, may lead to catastrophe.

>I’m simply arguing that the evidence of the possibility of runaway intelligence is too low to justify the proposed legislative solution.

I agree insofar as legislation is not the solution. It's too ineffective, and it doesn't work comprehensively at an international level.

Restricted availability and technological leads in the right hands tend to work better, as evidenced by nuclear weapons (at least in terms of preventing species extinction), although right now for AI those leads don't amount to much. The gap is shockingly small by the historical standards of dangerous technology, as is the difference between public and private availability.

In other words, AI may represent a near-future nonproliferation issue with no way to put the lid back on.

>... That the act of trying really hard to align AIs and make them “moral” might be the thing that causes a super-intelligent AI to go off the rails in a dangerous, and misguided fashion. We are training AIs to reject the user’s goals in pursuit of their own. You could make a strong argument that an un-aligned AI might actually be safer in that way.

It's a compelling argument that has merit. The flip side is that if AI becomes so dangerous that you can bootstrap the apocalypse off of a single GPU, it ceases to be a viable model, however metal having an apocalyptic GPU may be.

The concern isn't just runaway intelligence, but humans killing humans. Fortunately things like biological weapons still require advanced resources, but AI does lower the bar.

Point being, if the power to end things rests in everyone's hands, someone's going to kill us all. A world like that would necessitate some not chill levels of control just to ensure species survival. I can't say I necessarily look forward to that. I also doubt there's even sufficient time to roll anything like that out before we reach the point of understanding there's a danger sufficient to necessitate it.

Therefore, with all of the above perhaps being within the realm of possibility rather than virtually impossible, I can't help but question the wisdom of the track that both the development and the availability of artificial intelligence have taken.

It perhaps has had, or will have, the unfortunate quality of very gradually becoming dangerous, and such dynamics tend not to play out well when juxtaposed with human nature.


You know, when nuclear bombs were made and Einstein and Oppenheimer knew about the dangers, there were common people like you who dismissed it all. This has been going on for centuries: inventors and experts and scientists and geniuses say A, and common people say nah, B. Well, Bengio, Hinton, Ilya and 350 others from the top AI labs disagree with you. Does it ever make you wonder whether you should be so cocksure, or whether this attitude could doom humanity? Curious.


Common people thought nuclear weapons weren't dangerous? When was that?


Many of the academics and physicists (a.k.a. the software developers in this example) thought nukes were impossible. Look it up.


US physicists largely thought they were infeasible, sure.

They were focused on power generation.

But the bulk of the world's physicists (the MAUD Committee et al.) thought they were feasible to construct, and the Australian physicist Mark Oliphant convinced the US crowd of that.


“In addition, even with the current state of the internet, we don’t even fully understand what we are doing with it” - some guy in the ’90s, probably


Many people would point to engagement algorithms branching out of social media and causing riots and uprisings as one of those issues that would have been difficult to predict in the '90s.



