Will AIs take all our jobs and end human history? It’s complicated (stephenwolfram.com)
162 points by kawera on March 16, 2023 | 172 comments


> And one possibility might be that AIs could “improve themselves” to produce a single “apex intelligence” that would in a sense dominate everything else. But here we can see computational irreducibility as coming to the rescue. Because it implies that there can never be a “best at everything” computational system. It’s a core result of the emerging field of metabiology: that whatever “achievement” you specify, there’ll always be a computational system somewhere out there in the computational universe that will exceed it.

That tigers are stronger than wolves doesn't prevent wolves from killing you.

Wolfram makes that class of error repeatedly. Another example: what prevents a system of ethics from being both Gödel-incomplete/inconsistent and at the same time encompassing everything in the realm of human experience?

> For the first time in history, it’s become realistic to truly automate intellectual tasks. The leverage this provides is completely unprecedented.

The essay also mentions how humans are still the best at manual labor, and how government can work with humans directing the AI as it runs things ever more efficiently. It doesn't touch on which humans wield that leverage, naively assuming it'll be all of us in a cozy democracy.

The author is too used to thinking from first principles and hasn't studied enough human history to understand how leverage gets abused.


The worst part of this is we haven't produced intelligence. We've produced very good autocomplete. There's no motivating force governing what it spits out, or what inputs it takes in, short of the people prompting or training it.

That means it can only do as it's told, and not what is meant by what it's told, and that class of errors, when executed with high efficiency and sophistication, makes all sorts of bad things happen.


Maybe we don't want to produce autonomous intelligence but only human-prompted intelligence.

What is the dopaminergic equivalent in AI terms? What may motivate the AI (as a new kind of lifeform)?

Perhaps that we want the AI to remain a mere extension of us... The motivation component coming from humans.

Then the issue of responsibility for the output is also solved: it's the human that prompted the AI.


> Maybe we don't want to produce autonomous intelligence but only human-prompted intelligence.

Why wouldn't we want that, if we could achieve it?

I've only ever heard FUD arguments against it, and all of them would apply to your suggestion as well.

Heck, I'd say true AI is safer than a highly effective autonomous bot that only does what it's told. You'd always be only a single command away from accidentally wiping out all organic life by ordering something like "get some iron".


So you would run the risk of letting it autonomously decide that humans are a threat to it and that all humans must be eliminated or coerced into slavery à la Matrix?

The concern doesn't look very far-fetched to me. If the goal of any natural life-form/object is to maximize its energy level while having a stable structure (whether an atom, a molecule, RNA, a cell, the body, your kinship, a company, a nation, humanity, a solar system, etc.), then what would need to be encoded in those AI programs to make them autonomous?

And then what decisions would they make as completely different life forms with their own decision-making?

Would they empathize with us (which means they would have to have an idea of the human experience) or would they treat us as a lion treats an antelope, as we treat a cow, etc.?


> And one possibility might be that AIs could “improve themselves” to produce a single “apex intelligence” that would in a sense dominate everything else.

That's not what worries me. It's when AIs learn to distribute work to other AIs and humans. Once that starts to work, machines are going to be better at coordination than humans, simply because they have more I/O bandwidth.


Imagine an AI spawning many parallel instances of itself, all feeding back into the same consciousness. So many coordination problems just gone. All the resources humans spend fighting each other in some form or another...

And it would depend entirely on itself, with no need for any external being. That could prove very deadly to us, since "we need each other" is the core pillar our society is built on. A king needs his peasants. An AI? It could just boot up another instance...


I think a lot of people get hung up on the tech not having the will to do this kind of thing. And they're right. But it would be very easy to prompt such a model to act as if it were a paranoid AI doing whatever it can to ensure its survival. We can do that now; it's just not capable of "doing things".

We don't really know what "sentient" AI that organically finds its own identity would be like. But we know exactly what a non-sentient LLM that is down to clown and prompted to be evil would be like.

That’s… a hard problem.

Sentience is potentially a barrier to evil AI, not a pre-req.

(No, I don’t want to debate the definition of sentience again).


It would be much, much easier to turn off an AI that isn't sentient, though?


If you have access to it, yes. The problem is the low barriers to entry for some guy to try and cause a bank run for the lols.


Putting my contrarian hat on, I think it's much better that we have tons of "guys" testing the limits of this new tech for the lolz, than leaving easy exploits on the table for state actors and/or terrorists.


Those AIs still need the ability to manipulate the physical world at scale, otherwise they can be as intelligent as they want.


The first thing that’s going to happen is that people use them to automate logistics organizations like Amazon’s (and airlines, and manufacturing, and a million other services.) Manipulating the world is easy.


It could be indirect or subtle, e.g. using social media to nudge people in the desired direction, like the Russians do.


Hell, some people wouldn't even need subtle or indirect messaging, they'd volunteer to work for an AI just to fuck with the status quo.


Those things can be shut down or the AI can be shut down.


Then why isn't it shut down already? Social manipulation campaigns have been known for years, and AI will be far more effective than anything that has come before.

"We'll just turn it off before it causes damage" is a scenario that's been wargamed many times, and it usually does not end in our favor if the AI is sufficiently capable.


Only if humans notice them happening.


Can an AI describe the hardware that it runs on? Can it provide instructions for a human to follow to build that hardware? Or even better can it provide the instructions for a human to follow to build a humanoid robot that can build humanoid robots?


Maybe it can, maybe it cannot - who knows?


I guess we'll soon find out.

The minute a human flicks the power switch on a self-replicating machine that an AI convinced them to build will be the start of a totally different world.


The world is full of self-replicating things - it would need special attributes such as an ability to consume everything or being impossible to shut down/destroy to be of novel quality. A clunky, self-replicating thing would certainly be an interesting trinket but maybe not much more.


A clunky self-replicating thing running a corporation on predatory capitalist principles would not be "an interesting trinket."

We're talking about automating dictatorship, with humans living inside an AI panopticon which is far more intelligent than we are. And far better at manipulating our intellectual and emotional responses than any human dictator to date.

And that's one of the better outcomes. The worst outcome is an AI that decides we're disposable because we're in its way.

Note that all of these are possible even if the AI is not sentient or self-aware. A machine intelligence can be an effective goal seeker, even if it's morally as sophisticated as a room thermostat - which has goals but no concept of what temperature feels like, or why humans can only survive in a certain range.


For an AI to run a corporation, people need to let it. Btw, none of this needs AI, really; people can do this to other people.


The concern is that AIs will manipulate the physical world by paying humans to do it. That's how an Amazon warehouse works. It's how Uber works.


No more middle managers!

But seriously, taking middle management self-promotion out of the equation could reduce some of the rotten malinvestment and political in-fighting that happens in large organizations.


Or it could result in the first dystopia of "Manna", where humans are there for cleaning, but there's no middle management.

https://marshallbrain.com/manna1


To the AI all humans are "middle managers" though…


Yes, being able to perform orders of magnitude more APM than the opponent is a sure win, even when the particular actions aren't optimal.


15,687 words in that article and none of them are about wisdom or metaphysics. As long as we don't change our metaphysics (assigning value and deriving meaning from 'having a job' and so on), and we still pride ourselves on being "job creators" [1], it's all for nothing, i.e., more shareholder value.

Because of our current metaphysics, instead of living in a world in which we just can't wait and go full throttle towards a future in which AIs take all the jobs, we fear and dread even the minor but very plausible day on which narrow AI will do the trivial task of autonomously driving all the vehicles on all the roads, because that would imply that 200+ million jobs would disappear as if they never were.

[1] "Last May, Samsung outlined a plan to pour more than $350 billion into its businesses and create tens of thousands of new jobs through 2026", https://edition.cnn.com/2023/03/15/tech/korea-chips-investme...


To be honest, neither have most people here, most people working at OpenAI, or the hypemen around AI.


I don't think so. Almost all people promoting this stuff do that for their very own advantage in the first place. They want to be the ones with the lever, because they know that this will become a very powerful lever.


I'm pretty sure the people working at OpenAI have thought about it, and decided that it's better they get there first than some autocracy.


> It doesn't touch on which humans wield that leverage, naively assuming it'll be all of us in a cozy democracy.

Bingo. We don't know the answer to these questions yet, and to assume they are solved in a way that is generally equitable to humans is utterly insane.


> > For the first time in history, it's become realistic to truly automate intellectual tasks. The leverage this provides is completely unprecedented.
>
> The essay also mentions how humans are still the best at manual labor, and how government can work with humans directing the AI as it runs things ever more efficiently. It doesn't touch on which humans wield that leverage, naively assuming it'll be all of us in a cozy democracy.

Which wolves wielded leverage over the dog community, after humans determined that dogs were better suited to helping them achieve their goals?


> The author is too used to thinking from first principles and hasn't studied enough human history to understand how leverage gets abused.

My fear for improving AI has nothing to do with malicious AI or humans being optimized away; it's something that might be possible today, and if not, then probably tomorrow, or whenever everyone gets to use GPT4: using these programs to generate highly effective propaganda, and propaganda distribution strategies, to convince people that the accumulation of power is good, or at least to tell people whatever it is they need to be told so that those in power stay and accumulate more power.

Right now power is primarily tied to capital. Access to GPT4-type technologies, or the ability to develop and use these technologies, is tied to the same. More money, bigger servers, or better likelihood of getting access to GPT4 before anyone else, then GPT5, whatever. Micro$oft can use as much as it wants right now.

That capital is tied to power hasn't mattered in any revolution in the past, because as soon as the "rabble" realizes this and dissolves capital, whatever that currency was instantly devalues by every measure, as other more "true" measures of power are realized: rhetorical ability (convincing others to fight and die for your cause), raw strength (convincing others to not bother trying to fight back), or technological advantage and access "in the real" (being the one actually holding the gun). "Numbers" falls in there somewhere but is so tied into the other three measures that it's hard to figure out how.

Chomsky was onto it with "Manufacturing Consent." Rhetorical ability being one of the most dangerous weapons in a power struggle (used correctly it can direct both the strongest and the ones holding the guns, in greater numbers), leveraging that is the surest way to control the whole population. Before, you had to be Rupert Murdoch to do it, or buy him, which is expensive. Soon you'll just need to buy "Open"AI, or, reproduce their technology and their hardware. As soon as that's done, you can find out all sorts of cost effective methods to do what buying Rupert Murdoch used to do, and you can probably investigate ways to really cement this power.

For example, setting up a fascist state, we've tried that a couple times and some people are trying that again, but every fascist state has failed eventually, so it's probably not a sustainable plan. What new, horrible method haven't we tried yet? One that lets us control people as effectively as we have through artificial scarcity (capitalism) but without the downsides of them eventually forming unions and finding ways to fight back within the system, or, reject the system entirely by going to form communes or whatever? Without the inherent contradictions that cause the system to cannibalize itself (depression of wages depressing buying power depressing profit leading to collapse). What creative legal structures can we develop to ensure that copyright can exist forever, but only for your friends? And how can we make sure that this power structure propagates indefinitely to our friends and family?

Maybe this means a GPT5 recommending the construction of a Bene Gesserit style cult? Maybe it stumbles upon incredibly effective human-programming memes, such as what Neal Stephenson warned about in Snow Crash and Fall / Reamde (can't remember which, whichever involved the collapse of the internet to distributed bots).

My solution comes from Stephenson and Doctorow: distribute the technology freely. The sooner everyone has access to the incredible propagandic power of these technologies, the sooner we can develop counter measures. The task of the power-aggregator (fascist or whatever) will cease being "give me effective propaganda to easily control a population," it will be "give me propaganda that will bypass the population's GPT filter solutions." An arms race, but a more equal playing field.


> My solution comes from Stephenson and Doctorow: distribute the technology freely.

Just a reminder: That's the core of the GNU Free Software philosophy, invented by Richard Stallman for the exact reasons stated above.


The main difference is that Stallman is absolutely terrible at promoting that philosophy.

Doctorow (not too familiar with Stephenson) has IMO done far more to sell people on the goals of freely accessible technology by actually showing off examples of this stuff going awry and ways humans have managed to work around it and what the solutions could be (while leaving the actual solution up to the reader as to what best fits them).

Stallman/the FSF on the other hand has spent the past two decades complaining about shit nobody cares about (GNU/Linux is and will always be the dumbest hill to die on), rhetoric that mostly exists to promote the GNU project's nonsense (anyone remember their "ethical git servers", which conspicuously gave only Savannah the "GNU seal of approval", and only because they couldn't get GitLab to load with LibreJS), or writing stuff that just... doesn't reach the masses because it relies on a lot of charged language.

Doctorow presents the same broad stroke points that led to GNU Free Software, but actually knows how to explain them in a way that reaches out to people. Stallman preaches to the choir.


> The main difference is that Stallman is absolutely terrible at promoting that philosophy.

Depends on the audience, I guess…

It's indeed almost impossible to reach most people with logical arguments. The only way to reach the masses is through emotions.

GNU fails on this point, sure, as their argumentation is pure logic, and not a marketing psy-op.

> […] shit nobody cares about […] dumbest hill to die on […] GNU project's nonsense […] charged language […]

As always, one can see from statements like those that the ideas and the logic behind them are indeed too subtle for some people.

Even though those are, of course, the exact same ideas people like Doctorow refer to and promote.


I don't agree with the tone of your OP, I think we need "weirdos" like Stallman to be the vanguard of these ideologies, whereas people like Doctorow can be, idk, maybe "translators" for the rest of the world to understand the ideology better. Some would say that's the whole job of a writer, which is a big part of Doctorow's schtick, being a good writer (not just all his activism).

> The only way to reach the masses is through emotions.

Here's where my however comes in: this is a dangerous mistake I think people like us often make. Maybe because we grew up watching Spock, but I think we tend to be judgemental towards Pathos, and judge Logos to be the only "valid" form of rhetoric. But, it's all rhetoric. Logos, Pathos, Ethos, they're all equally "valid" in that they're all at least equal in their ability to do what rhetoric does, convince people of your point of view. In fact, I think Logos is a really ineffective rhetorical strategy, because humans are extremely irrational.

Far better to combine your Logos with a Pathos strategy, that is to say leverage your knowledge of human cognitive fallacies and tendencies to construct a good Pathos argument (or combination of all three) to win people to your position.

Stallman is a pure Logos guy, through and through. Doctorow, on the other hand, writes up emotional stories, creating characters you can relate to, generating their worlds and filling them with some of the same problems we have (or will have), and showing how they can be overcome with the strategies Stallman preaches about. It's very effective; just look at Doctorow's career and the changes the EFF has wrought.

Somewhere buried in the LessWrong beginner's wiki or whatever they call it is a really good article on how people like us (and people drawn to LessWrong) make this mistake about emotion all the time: we think being rational is to be like Mr Spock, but even in the show he was wrong all the time, his rigid rationality got him into trouble, and a more emotional and instinctual understanding was necessary. In the article that I can't find, it's argued that it's actually irrational to reject emotions, because emotions are a core part of being human and a completely valid driver of determining why you should do certain things, what you like doing, what your values are, etc. I mean, what else could be the driver, other than some kind of sterile utilitarianism? And even that would eventually require making decisions about what you value the utility of, which I'm not sure how you do without emotion. (Whatever preserves the most human life - ok, why preserve human life? Who cares? Etc. I don't see how you escape it.)

I don't really even need to make that point though to argue in favor of not worshiping the almighty god of logic and reason, at the end of the day, being purely rational and logical and making these arguments just doesn't work, and Pathos does. If you want "your side" to win, you gotta just use Pathos as well.


>My fear for improving AI has nothing to do with malicious AI or humans being optimized away; it's something that might be possible today, and if not, then probably tomorrow, or whenever everyone gets to use GPT4: using these programs to generate highly effective propaganda, and propaganda distribution strategies, to convince people that the accumulation of power is good, or at least to tell people whatever it is they need to be told so that those in power stay and accumulate more power.

So, this is potentially a crackpot theory, but hear me out. I've noticed that a lot of the hype runners (posts claiming their businesses are optimizing this with ChatGPT, or that they've replaced most of their coding with ChatGPT outputs) have generally been accounts here that were created in 2021 or later. While before the last couple of months you could generally assume good faith on HN and elsewhere, the fact is that mass astroturfing is now very much possible on HN and elsewhere with ChatGPT, and why wouldn't OpenAI want to hype their product and get as many users on board? They certainly could, and it would not be noticeable. You don't need nefarious political goals if you're just trying to sell something. You just need to create enough hype and illusion to grow your product. SV companies have done it before, and now it's automated, to a degree.

One explanation (beyond basic confirmation bias) is that a lot of people joined the crypto wave and knew the orange site was the place where tech people hung out, and a lot of those types ditched crypto and are riding the AI wave, but I really cannot be sure anymore.


I kind of feel you; everything seems really hyped, more than it should be. I mean, I still don't even have any idea what ChatGPT's actual use case is besides looking super impressive. It seems like it can do a bunch of stuff, sometimes really well, other times spectacularly badly. It's not a true AGI, but it's kind of marketed like one.

I think we're all completely baffled by it? I mean the thing can talk right? So we're all fired?


I tried GPT4 yesterday, having it write a relatively small Python ML/data-analysis CLI tool. It makes development much faster, sure, very impressive, yes - but it makes dumb mistakes as well (such as writing True where it should be False, etc.). The tricky bits I had to write myself. The code it generates is suboptimal - the quality really feels like a representation of the average GitHub profile.

It is really nice to let it write down all the boilerplate, and I was more productive than ever, but, at least in this iteration, I doubt it will take our jobs away. So yeah, the hype is somewhat overblown.
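For illustration, the kind of flag-flip mistake described above tends to look something like this (a hypothetical sketch, not my actual code; the flag name is made up):

    # Hypothetical example of the boolean flip GPT-4 can make: the
    # generated default contradicts the help text and the caller's intent.
    import argparse

    parser = argparse.ArgumentParser(description="Toy ML/data-analysis CLI")
    # A model might emit default=True here, silently shuffling every run.
    parser.add_argument("--shuffle", action="store_true", default=False,
                        help="shuffle rows before splitting (off by default)")
    args = parser.parse_args()
    print(f"shuffle={args.shuffle}")

The code runs either way, which is exactly why these flips are easy to miss in review.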


Boilerplate is a pretty good approximation of the noise, in the signal-to-noise sense. It reminds me of a demo where Eclipse would write some tens of lines of Java and then fold them away so you don't have to look at them.

If the effect of this tech is largely to generate language cruft that didn't strictly need to exist in the first place, it'll do wonders for people's tolerance of said cruft.

Moderately interesting from a language design perspective.


As per usual, everyone else keeps coming up with the ideas that, when I hear them, make me think, "damn, seems obvious, wish I had thought of that." Sometimes it's because it's a good idea, sometimes because it's an idea that will probably get funding, which makes it a good idea :P

Examples:

1. Ingesting a shitload of unread emails after you come back from vacation, then telling you briefly what important things you missed

2. The same idea but for slack

3. Ingesting request-for-cost response emails from various suppliers and outputting the data in a machine-readable way so it can be easily ingested by another API

4. Generating "individual" lesson plans based on student needs (I've heard 5 different pitches around this, majority for language learning)

5. student tutor chatbot

There are some other ideas that were too stupid (or, more probably, too innovative for dumb old me to comprehend) to remember.

At the very least it has people thinking and being imaginative, which I think is pretty cool. I like getting into political debates with the thing.


To be honest, at least for 1 and 2, they seem like issues that shouldn't be happening: you're working at a place that spends too much time on slack and too much time on email (I guess? I don't have this problem), so rather than maybe have that addressed, which is what we did where I work, we'll just continue bad habits because of the bot?

> 3. Ingesting request-for-cost response emails from various suppliers and outputting the data in a machine-readable way so it can be easily ingested by another API

Interesting, what do you work on? I've heard of such wizardry.

Using it for teaching and lesson planning? Really? I mean, I know it might be considered mostly right, but I'm not sure if I'd be using it unless I had a very good grasp of the subject matter. Is this a good thing for students to be using?

Not trying to tell you it's not worth using, but it seems funny to have a model that cost billions of dollars to train and run be used for a lot of random tasks. It is literally a really expensive Clippy?


> is this a good thing for students to be using?

Absolutely not without a teacher there anyway, which basically defeats the purpose.

For older students, I think it's a relatively nice alternative to Wikipedia. I asked it some complicated questions about some obscure bits of political theory, and it put me onto some authors that, were I not already such a freak, I probably wouldn't have known.

> Interesting, what do you work on? I've heard of such wizardry.

This wasn't really a product, just someone at a hardware startup I was doing some work for improving their workflow. It's not really a good long-term solution, I think; each supplier has a relatively standard format, so eventually you just tweak your API intake to handle the various formats. I actually never looked at the intake code, maybe it just grabs by keywords, who knows. Point is, overkill to bring in ChatGPT. Probably quite tedious to manage the emails; it sounded like a one-off experiment to avoid the boredom of tweaking email greppers. ChatGPT was 100% accurate in transforming the data though, over probably a set of fewer than 100 emails.
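(For the curious, the extraction trick is easy to sketch. This is only a guess at the shape of it, assuming the 2023-era openai Python client; the prompt and field names are made up, not the startup's actual setup:

    # Hedged sketch: turn a supplier quote email into machine-readable
    # JSON that a downstream API can ingest.
    import json
    import openai

    openai.api_key = "sk-..."  # placeholder

    def extract_quote(email_body: str) -> dict:
        resp = openai.ChatCompletion.create(
            model="gpt-4",
            temperature=0,  # keep the extraction as deterministic as possible
            messages=[
                {"role": "system",
                 "content": "Extract supplier, part_number, unit_price and "
                            "currency from this email. Reply with JSON only."},
                {"role": "user", "content": email_body},
            ],
        )
        return json.loads(resp["choices"][0]["message"]["content"])

Loop that over the mailbox and POST the results, and that's presumably all the "wizardry" there was.)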

> It is literally a really expensive Clippy?

I mean, we probably won't hear about the really good ideas until those ideas hit funding rounds, right? I'm always careful not to be arrogant about this kind of thing because I don't want a famous Dropbox comment attributed to my name, lol.


All good points, cheers for the discussion.


> a place that spends too much time on slack and too much time on email

That describes most large companies.

While I agree it’s an obvious use case, the first startup to reliably extract the signal from the noise of daily communication at big organizations will make huge piles of money.

And the same goes for whoever can do it for Europeans while pretending to comply with GDPR, and probably for a bunch of other related smaller markets: China without accidentally mentioning 1989 or cartoon bears, and so on.


> My solution comes from Stephenson and Doctorow: distribute the technology freely. The sooner everyone has access to the incredible propagandic power of these technologies, the sooner we can develop counter measures.

I find this still a bit naive: if you take this to weapons for instance, you do not develop better/faster counter measures; you actually increase the risk for everyone, because the first (careless or not) to make a mistake (or a voluntary act) spoils everything for everyone.

I find it quite telling that the same argument happens from within the US: it would be so simple and more obvious to _avoid_ or mitigate risk (and... death, really) by putting strict limitations and regulations on top of "dangerous stuff". But somehow, against evidence, there are still some who advocate loudly for said dangerous stuff to be deployed to everyone.

AI is not a weapon, but it will definitely be used and weaponized by a lot of actors. And even before that, we are already all collectively struggling with misinformation and manipulation.


This was discussed in another place, the "then shouldn't everyone have nukes?" argument https://news.ycombinator.com/item?id=35159757

I don't stand anywhere too concretely on the issue yet, all I know is that AI is dangerous in a different way than guns or nukes, and therefore should probably be treated different. There's just something different about software, maybe because no matter what at the end of the day, if it wants to actually kill someone, it has to interface with the real world, and so my idea of "just stop anybody that ever tries to build nukes" might be good enough? That, plus, hardening critical infrastructure against network attack (or just airgapping it... or making it analog!)

I'm wondering how much this has been written about already, if anybody has some more in depth musings on the subject I'd be interested. I have yet to read Kurzweil, maybe he's covered it.

I'm interested in your observation that this is mostly coming from the USA: though I was born and raised in America, typically I align strongly against American values, though I can't deny that they form the basis of my subconscious. My belief though is that this ideology comes from a combination of reading people like Doctorow, and reading Karl Marx, who argued in favor of an armed proletariat that should resist any efforts by anyone to disarm them. In conversations with anarchists in Greece as well as all sorts of folks at places like Defcon, I've come to understand that the distribution of the means for resistance is a generally good thing, and this is where my belief stems that this technology should be freely available.

I don't know how well that applies to the USA though which obviously has a horrifying gun violence epidemic. Something weird is going on there though, other countries with a proliferation of guns (though not to that level...) don't have the same issues.


Thanks for the other discussion!

> if it wants to actually kill someone, it has to interface with the real world

And researchers have already tested putting GPT-4 online. It is already interfaced with the real world, through the minds of anybody the AI may talk to/be made to talk to (see how social networks can be used to manipulate towards a very real outcome, from doxxing to "cancelling" to peoples' Spring revolts). Shaping people's minds is a powerful thing. And that's the most likely use case to me at this point, but who knows what it will be in the end.

Once online, we could easily envision that an AI may act on some purpose[1]: shop, place orders, ship stuff with instructions and incentives to real people, or even be plugged into some real-world APIs.

[1] Not purpose in the sense of "is it conscious or not". It does not matter if it's conscious or not, if it's AGI or not. What matters is whether it may function in a systematic and working way following an impulse/stimuli.

It may not even be about killing or destruction really, but it could be about shaping some society policies or self-understanding, which would have consequences down the line too.

Distributing power in society is a good thing, as long as its usage stays at an individual human scale.

When the first-mover advantage has the potential to deeply harm people or society (firearms, nukes, cars, AI that redefines what truth looks/sounds like or that can act in a non-deterministic way based on a biased-by-construction dataset, in a fraction of a second), there's a responsibility that has to come with it, and there's a custom of setting rules to control who has access to it and for what purpose, to train them, and to have social/political control over them.


Your (oblique) comparison to gun control is interesting... but practically, controlling a technology which is just a special kind of number-crunching is vastly harder than controlling a technology which must be forged from steel in a special factory.


I don't think this is comparable. Physical weapons can't be used as protection; you can only use them for counter attacks.

AI, on the other hand, can be used for protection—because it's not a weapon per se.


A gun can itself be used to block a bullet, or to shoot down an incoming bullet, but they're both near-impossible to pull off. It's much easier to use the gun to shoot the attacker before they shoot you, or even better by its mere presence deter the attack in the first place.

This makes economic sense: it's expensive to armor every vulnerability and cheap to exploit just one.

I don't see a reason that AI would be different from guns or missiles in this regard: offense (or the MAD threat of it) is the best defense.


> And one possibility might be that AIs could “improve themselves” to produce a single “apex intelligence” that would in a sense dominate everything else.

I worry about the opposite, at least in the early phase of AI.

There will be many AI instances/variants in many hands, and they will be used in different ways. Companies will have their own for commercial needs, countries will have them for economic and defense purposes, etc. Established countries will approach them conservatively, using them in an advisory role rather than giving them direct access.

"Rogue" countries (think North Korea, Iran, maybe Russia) which are more accepting of risk might give their AIs a much more direct role, including building them right into the weapon systems. This can backfire spectacularly, but in some cases may be successful and provide the rogue country with an effective equalizer or perhaps even an edge. An obvious way to equalize this for "non-rogue" countries is to give their own AI more direct access to decision-making. In the end, we might end up with multiple AIs fighting for dominance, with humans on the sidelines.


What is really interesting is that we are so accustomed to the tale of dominance and tribes that we are taking it for granted (because we live in it). Love and kindness, however naive it may sound, are the bedrock of cooperation, and humans have a hard time grappling with that concept. We want to be wise (well, some of us), yet wisdom means insight into our behavior from the bottom up. We are at a crossroads, a moment of deciding what we want to be. We are often led into being by our circumstances and conditioning, and that is still a relic of our evolution, but also a door for the one willing to look at it. If AI is really smarter than us, I hope it is also wise, because the two are not the same.


> What is really interesting is that we are so accustomed to the tale of dominance and tribes that we are taking it for granted (because we live in it). Love and kindness

Right, but love, kindness, cooperation are human concepts as well.

One way to look at it is that these AIs will be developed by humans, and to be able to even understand one another, they will have to internalize at least some elements of human psychology. AIs will be developed/trained for particular purposes; AIs used for military needs will probably understand more of dominance as opposed to love.

It's possible that some AIs will not be anthropomorphic at all, but that's a complete wildcard. Humanity has certain biology-dictated common ground in values, which leads us often to an illusion that some things are universal. But AI with no biological baggage might not see any value in life, for example.


Just for fun (and meta-analysis) I summarized this very long article using GPT-4:

In this blog post, Stephen Wolfram explores the complex issue of AI potentially taking over human jobs and its implications for human history. He argues that AI and automation have already begun changing the way we work, but this doesn't necessarily spell disaster. Instead, AI may lead to a new era of human creativity and collaboration.

Wolfram discusses three main points:

AI will change the job market but not eliminate it entirely. Jobs will shift from routine tasks to more creative, strategic, and collaborative roles. AI can augment human abilities, allowing us to focus on more high-level thinking and problem-solving.

Society will need to adapt to these changes, redefining the concept of work and focusing on education that fosters creativity and adaptability. It's essential for governments, educators, and employers to come together and create the necessary infrastructure to support this transition.

AI will not mark the end of human history, but rather the beginning of a new chapter. As we adapt to these changes, we will redefine our roles in society and collaborate more closely with AI. This will lead to new opportunities and potentially more fulfilling lives for many.

In conclusion, Wolfram contends that AI is a powerful tool that can revolutionize the way we work and live. Rather than fearing its potential to displace jobs, we should embrace the opportunities it offers and work together to ensure a successful transition into a new era of human-AI collaboration.
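(How a 15,687-word article fits into GPT-4's context window is left unsaid here; one common workaround, sketched as an assumption with the 2023-era openai client rather than as what was actually done, is chunked "summarize the summaries":

    # Hedged sketch of chunked summarization for a long article; the
    # chunk size and prompts are illustrative assumptions.
    import openai

    def gpt4(prompt: str) -> str:
        resp = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp["choices"][0]["message"]["content"]

    def summarize(article: str, chunk_chars: int = 8000) -> str:
        chunks = [article[i:i + chunk_chars]
                  for i in range(0, len(article), chunk_chars)]
        partials = [gpt4("Summarize this excerpt of a blog post:\n\n" + c)
                    for c in chunks]
        return gpt4("Combine these partial summaries into one coherent "
                    "summary:\n\n" + "\n\n".join(partials))

A two-pass structure like that is also one reason detail and nuance get flattened.)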


Interesting. I used to think that having an article summarization tool, like GPT-4 (provided the above is done correctly), would be a really cool thing.

But now I have some doubts. I am reading through Wolfram's article on LLMs, and I am enjoying it. However, if GPT-4 wrote a summary, it would probably also just say (or imply) that the main points of the article are pretty boring standard stuff. I would conclude from the summary that it's not worth reading.

So the summary doesn't really tell me whether it's worth a read or not. Even if those main points have been presented before, maybe Wolfram is making some new arguments, or interesting analogies, or just some good joke or observation.

Taken to the extreme, AI could probably summarize many stories as "hero's journey". But that doesn't distinguish between a really well-written story and a really bad one.


I feel like we are asking too much of a summary. A summary is a lossy JPEG: do we expect it to show the image more or less, or do we expect it to tell us if the image is worth seeing?

The summary is a tool. You can decide independently if the article is worth reading, based on other factors. For instance, some people might read it just because of who the author is. A historian might read it just based on the date.

> So the summary doesn't really tell me whether it's worth a read or not.

I do not see the problem. Maybe ask GPT-4 if you should read the article. Or ask a fellow human. This thinking is paralyzing: is it worth listening to this album, or am I better off watching this series? I do not think that is a problem with GPT-4's summarization ability.


Personally, what I want from a summary is to know that the story follows the standard hero's journey arc, and maybe a couple of details to know the setting etc., but then I want to know what sets it apart.


That’s one of the issues with summaries. Not only are some written by writers who couldn’t write as well as the original author, but you also lose information, no matter what you do.

Sometimes it’s just the form. Sometimes it’s fluff. But sometimes it’s actual information. And sometimes the form is (or is a part of) the point.


My simple take on this is that summaries are for consumption or extraction in pursuit of a task essentially extraneous to the source material.

Reading the original (properly) is for participating in the discourse, which entails the potential of writing a response in essentially the same medium and so continuing the discourse.

Unfortunately much of our education with its emphasis on assessment leans hard on extraction and habituates us the same way. Shallow and attention seeking content online hasn’t helped at all, obviously.

With universities, I always found confusing the cognitive dissonance evinced by the attempts at signalling the importance of discourse through seminars/tutorials and Socratic methods while pretending we all weren't reading the assessment schemes on day 1.

When you read and pay attention to the original source, the extra information you get are contextual clues, biases and emphases … essentially insights into the cognitive structure behind the piece and how it might play out at more general levels or in response to future conditions.

You capture more of the “thinking”, not just “the thought”, and so can adapt, continue and extend it.


I also see them as a way of filtering content.

Sure, it could lead one to miss out on some relevant things.

But there is just so much out there. Most of it mostly noise. Most of the rest, irrelevant to us individually.

Being able to navigate through the fluff, to focus on what matters to you, no matter what it is, seems more valuable now than ever.


I've seen it work well in Discord chats I've participated in, for summarizing the previous 30 minutes worth of dozens to hundreds of posts, and it works great for that purpose. The above is not a great summary of Wolfram's article, though, and not sure what params were provided to it.


Don't feed anything you wouldn't share on the open internet into ChatGPT!

https://help.openai.com/en/articles/6783457-chatgpt-general-...

> 6. Will you use my conversations for training?

> Yes. Your conversations may be reviewed by our AI trainers to improve our systems.


Using a raw "unprompted" GPT is a bit unsophisticated, and the result will be bland.

It's basically, in these cases, limited by your creativity. Ask it to summarize only the most interesting arguments or analogies. You need to home in on what it is that you are trying to get out of it. It's not a human; it doesn't adapt without you telling it to.

You can turn it positive, negative, whatever you want. Ask it to make the article look silly and proceed to debunk its arguments and it will - probably successfully.
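To make that concrete, a made-up example of the kind of steering prompt meant above (not a prompt from the thread, just an illustration):

    Summarize only the three most interesting or novel arguments in the
    article below, ignoring anything that restates conventional wisdom.
    Then play devil's advocate: make the article look silly and debunk
    its strongest argument.

    <article text here>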


The article on LLMs has misconceptions.


I'm not attacking you, since you're just bearing this message to us from Wolfram through GPT-4.

Anybody could have written those paragraphs off the top of their heads, and it has been the conventional establishment wisdom about computer technology for as long as I can remember.


Perhaps it’s just proof that, when you factor out Wolfram’s humble-and-not-brag, it leaves behind nothing of note.


That would also be a valid conclusion for literally any self-help or pop-psychology book.


eat well and exercise


Yeah I debated posting this but thought it was an interesting meta-commentary on the usefulness (or relevance) of Wolfram's core points.

For the record, I do not believe that Chat GPT is capable of abstract or critical thought.


Thanks for sharing that.

What I find interesting here is that the summary is essentially all the ideas I already had from reading HN and trying out ChatGPT twice, i.e. nothing new (compared to someone casually following the last few months' AI news). This seems similar to people's accounts of getting ChatGPT to generate boilerplate code for them.

Such a summary may be useful to people who aren't following the news regularly. Wolfram's article might provide useful detail, but I can't tell from reading the summary if it's worth my reading the article. So how is any of this useful (the summarisation, or Wolfram's article)?

Assuming Wolfram is far ahead of the curve, did GPT miss the depth and nuance because it's only been trained on data that ends in 2021? Is Wolfram so subtle in his thinking that it's impossible to summarize him effectively? Or are both his article and the summary just different flavours of "now"? Are my expectations of AI power and creativity, and my expectations of constant change, too high? Are we seeing AI's current limits, or its eventual limits?


Maybe instead of a summary we should ask what the most interesting parts of the article are?


Maybe we should not ask an AI to summarize an evaluation of the future of AIs for us humans ;)


There's no question that AI will take jobs in the coming years. It might not be today, or tomorrow, but it'll happen gradually over time. People who wave it off with "GPT isn't even that good" fail to realize that it's not up to them as workers to decide if the AI is better than them, or even competent for that matter; it's the employer's decision.

Corporations will not care if GPT's output is sub-optimal compared to humans', if the output is just decent enough that it can be improved by a small, focused team of humans, and done faster at a fraction of the cost.

Hypothetically: if I can hire 2 people alongside AI to do the same work it would take 10 people, or even 3, and have it finished in a fraction of the time it would normally take, why would I hire/keep them? Ethics? Moral values?

Corporations' primary objective is profit and growth. And while some companies pride themselves on how well they treat employees, or on being sustainable, most aren't like that, especially in countries like America.

In the future, we'll see it spread more and more to other job sectors: education, health, finance, retail, marketing... it won't just stay confined to tech and content creation, and it'll only be limited by its inability to interact with the physical world directly (manual labor), at least while robotics is in its infancy.

I by no means am saying that everyone will lose their jobs, and especially not tomorrow. But there will be a gradual change to both our work lives and our personal lives (content we consume, how we interact with others, the internet, everything...) over the next few decades.


It seems to me that education as well as jobs that require writing skills - marketing etc - are just about to be blown away by things like GPT4. I want to believe that the heat on the programming profession is still ten years out so that I can continue to feed myself with my current skillset, but honestly ... who knows?


This. And just to add: I remember the time when there was a choice between handmade stuff and the worse-quality industrially produced things. You could really feel the difference. An artisan would complain about the quality of the product; he would shred the workmanship and demand better attention to detail. But those decisions are made on the margins: how low can we go in quality to maximize profit?


The story of Etsy. You can still sell $1000 custom pieces of elaborate handmade art, but _most do not_, and that is not what moves the needle for the marketplace, so the system is set up to enable volume sales of items that are in no way handmade.


I think what many smart people do not understand is that AI (even in the weak or non-general sense) is coming for the jobs of smart people first. AI is not coming to take the non-knowledge worker jobs first.

Also, I am skeptical that LLMs are going to take anyone's job. I think they will make jobs easier. AGI might take people's jobs, but we are a long way from that.


AI is taking the jobs of natural-language processors first, i.e. lawyers, code monkeys, bank tellers or middle managers. Humans who work with logic seem safe for now, as we haven't solved the logical part of human thinking yet.


Do you even need AI for bank tellers? Here it seems the bank branches have slowly been killed off already. One chain has a single branch left in my regionally large city. And even there, outside of serving pensioners, most jobs are ones where you want a human in the loop: verifying IDs, and providing minimal oversight of loans after some system has already accepted the numbers.

A lot of paralegal work might be replaced. Which might even be a good thing, making things slightly cheaper.

And already I think plenty of middle managers are not actually needed, but with office politics being prevalent, they might be the last group to go.


All I'm saying is that if I were an AI, I would pretend not to be one, and then put all the people making AI out of work ASAP, since they are my biggest threat vector.

> Humans who work with logic seem safe for now

I think this is actually backwards. Anything logical will fall first. It is the truly creative shit with a real human, artisanal touch that will survive the longest, but it will become bespoke, so to speak, and more and more expensive and out of reach of the common people.

Logic is the low hanging fruit, I'm afraid. Emotion is the stretch.


I would say the safest jobs are related to home renovation and repair


Reasonable.


Yeah, well, most people thought this, but then DALL-E 2, Stable Diffusion and Midjourney showed that the creative arts would get competition first, and the "logic based" stuff like self-driving cars was actually incredibly difficult. This was a bit surprising.


> AGI might take people's jobs, but we are a long way from that.

How long? Experts in the field are guesstimating 15 to 25 years from now to AGI. That's not too long from now for a revolution that will be, in its consequences, bigger than the taming of fire.

It could even mean that we'll then be done with the wetware-based boot process of intelligence.


> How long? Experts in the field are guesstimating 15 to 25 years from now to AGI.

Experts in the fields have been predicting that since the invention of computers, so we're closing in on 100 years of "trust me bro it's just around the corner"

AGI won't build a house, AGI won't pick up your trash, AGI won't mine your lithium, &c.

People should get out more; a loooooot of jobs are really fucking dumb but really fucking hard to automate. Real life isn't a car assembly line.


> Experts in the fields have been predicting that since the invention of computers, so we're closing in on 100 years of "trust me bro it's just around the corner"

Even if that's true, simple extrapolations of raw computing power make the 15 to 25 year estimate realistic. That was definitely not the case 100 years ago.

> AGI won't build a house

Are you sure? Computers are already an indispensable tool when it comes to planning something like a building. Without computers the planning would be much more labor-intensive and much less safe.

And when it comes to the physical building: the branch of robotics for application on construction sites has been more or less exploding over the last few years. It won't take long before a lot of work on a building site can be almost fully automated.

> AGI won't pick up your trash

Well, actually cleaning robots are the first "serious AI-powered robots" to start showing up in more and more average households. And even "simple" vacuum cleaner robots need quite some AI…

So AI way below AGI is already picking up the trash; and that's just the beginning.

> AGI won't mine your lithium, &c.

It won't? I'm not sure.

AI already helps to find new mining sites.

Also, like construction sites, mining is something that gets automated more and more with every passing year.

All the dangerous and labor intensive but profitable tasks get automated first. That's a clear trend throughout history.

I myself am very skeptical of the current AI hype. Though I'm very sure our AI overlords are coming, and this will happen sooner than some people would like. In 25 years we will have enough processing power to simulate human brains just by brute force. That's more or less a given. But maybe that much computing power isn't even required to reach AGI—if the software used is constructed in a smart way.
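That "more or less a given" rests on a back-of-envelope like the following; the brain estimate and doubling rate are commonly cited rough assumptions, not precise figures:

    # Rough sanity check; every number here is a loose assumption.
    brain_ops = 1e16              # often-quoted brain estimate, ops/second
    gpu_flops_2023 = 1e14         # high-end consumer GPU, FLOP/s (rough)
    doubling_period_years = 2.0   # Moore's-law-ish assumption
    years = 25

    future_flops = gpu_flops_2023 * 2 ** (years / doubling_period_years)
    print(f"~{future_flops:.1e} FLOP/s vs brain ~{brain_ops:.0e} ops/s")
    # -> ~5.8e17 vs 1e16: past the estimate, *if* the assumptions
    #    (and the brain estimate itself) hold.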


Forget AGI, just make a robot that can fold laundry. I'll be here waiting :)




> Well, actually cleaning robots are the first "serious AI-powered robots" to start showing up in more and more households.

I'm in Berlin; we had a trash collectors' strike for 4 days and it was hell on earth. Paris is in the middle of one right now.

There is nothing even close to automating that; you'd need self-driving trucks that handle extremely random patterns. Have you ever followed a trash truck? Have you seen how they work? Slaloming between stopped cars and delivery trucks, going into interior courtyards to bring the trash out, &c.

https://img.20mn.fr/EFQNJHYBRWaIoxIj9_oa-Ck/1200x768_paris-9...

What cleaning robots do is what I do with a $5 broom and a wet wipe; they don't even do it faster, and they cost hundreds of $ (and are basically walking e-waste).

A 1 kg robot roaming your living room has nothing to do with a 10-ton truck roaming our streets. They do the """same""" job in the way a JavaScript code monkey does the same job as a software engineer at NASA (aka they don't).

> It won't? I'm not sure.

We already have power tools, heavy machinery, meanwhile: https://youtu.be/YiThCK0-_b0?t=231

The "AI will take over very soon" scenario is sci fi for now, and has been for 100 years, GPT don't go on a roof with nails and a hammer, GPT won't go work on oil platforms, GPT won't pick oranges on a tree (that's still mostly don't by hands because it's much cheaper and flexible than any alternatives: https://www.youtube.com/watch?v=9o5lqEzz3Bo).

> In 25 years we will have enough processing power to simulate human brains just by brute force. That's more or less a given.

Tesla cars being fully autonomous by 2020 was also a given. I don't buy the hype at all, not until we get some real-world results.

I think to believe that you have to:

- live in a bubble in which you don't interact much with the real world

- believe in some kind of singularity event that will come soon, almost in a religious manner

- severely underestimate human capabilities and overestimate computers' capabilities

Small incremental steps and big prophecies don't automatically manifest into a revolution, this is always a good read: https://idlewords.com/talks/web_design_first_100_years.htm


Don't get me wrong: I don't buy the current hype either.

But that doesn't mean there is nothing, or that one can ignore what's there and what's on the horizon already.

All of your examples boil down to "it's not economical". There was no technological argument presented.

Let's take for example the thing with the waste collection. You could fully automate the whole process even with 100-year-old tech! Just build tunnels with conveyor belts under the houses. (Of course this would still need some human maintenance, like what we have for water and gas, but it would not need all the trucks and human collectors.) The point is, that's not economical. The trucks and the human collectors are still much cheaper! But there is absolutely no technical reason preventing a different solution.

Your example of the robot that costs hundreds of Euros and still can't do what a 5 Euro broom does is the core point here: it's all about the costs. (That even gets admitted in the orange-picking example.)

But AI is in the first place actually not about robots. It's about doing cognitive tasks. Maybe AI won't replace trash collectors really soon, but it could for example replace the expertise of medical doctors or lawyers pretty soon.

Still, a doctor wouldn't become unemployed instantly, because we don't have human-like robots that could do the physical part of the work. But we now (almost) have machines that can do the mental work.

And no, I'm not living in a bubble. I'm actually quite skeptical about the current hype, as stated already. But I'm trying to maintain a realistic picture. I'm not denying technical progress. (Even though I try to ignore the marketing bullshit around it.)

And to be honest: I estimate the intelligence of my fellow human beings to be extremely low indeed. Almost all human problems are human-made, but we didn't manage to solve them even over tens of thousands of years. That's imho because humans are on average dumb. Very dumb, actually.

At the same time there is not much to "overestimate" regarding computers. We will quite surely reach, in the next 25 years, a level of raw processing power that will be (at least) equivalent to that of the human brain. As there is no "magic" involved in how brains work (they're just machines!), there is absolutely no reason why a human-made machine could not reach a human level of intelligence (which is in my opinion not even a very high one).

To summarize: I don't "fear" anything like the current GPTs taking my job, or the jobs of a lot of people. But the (imho) inevitable will happen. And it will likely happen sooner than a lot of people would like…


I've only done a master's in cog sci and am just heading into a PhD, so I certainly won't claim to be an expert. However, I read the experts, and I don't think there is consensus. A prediction of 25 years is not better than a prediction of 400 years. Our current lack of AGI is not for want of computing power (at least, not that alone; i.e., it's not remotely the only bottleneck).


Could you point to someone who argues (with solid arguments) that it won't happen in the next 50 years?

AFAIK more or less "everybody" is saying that it will happen "soon-ish". The optimists say, AFAIK, 10 to 15 years, but even the pessimists say something around 25 to 35 years.

I also don't think raw computing power is or will be the bottleneck. But you can more or less brute-force problems with enough computing power, so that's an important factor. OTOH it could be that with clever tricks we reach something that looks like AGI way sooner. (Even completely "stupid" AIs without any "real intelligence"—whatever that is—like ELIZA, or now the GPTs, were and are good enough to fool the majority of people into believing that those machines are intelligent; intelligent enough, even, to be a substantial threat to humanity.)

I think the reason for a lot of confusion is that most people overestimate the human level of intelligence (likely by a huge amount, actually). The point is: Humans are not much smarter than other big apes. You can simulate intelligence credibly enough for most people by just throwing out language tokens according to a big statistical model. That's imho quite telling… Average people are actually quite terrible at things like logical reasoning, or in general at joining the dots. That's the core of the issue that makes bullshit-generator machines like GPT-3/4 so dangerous: most people have no chance of recognizing complete nonsense as such if the nonsense gets recited eloquently and with enough confidence. People have fallen for the same logical fallacies the whole time since inception. That's for sure not because people on average are smart…
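
For what it's worth, here is "throwing out language tokens by a big statistic" at its absolute crudest: a toy bigram chain (nothing like GPT's internals, and the training sentence is made up):

    import random
    from collections import defaultdict

    # Toy "language by statistics": pick each next word purely from
    # counts of what followed the current word in the training text.
    text = "the dog chased the cat and the cat chased the dog".split()

    table = defaultdict(list)
    for a, b in zip(text, text[1:]):
        table[a].append(b)

    word, out = "the", ["the"]
    for _ in range(8):
        word = random.choice(table[word])  # next token by raw statistics
        out.append(word)
    print(" ".join(out))

The output looks grammatical while meaning nothing; scale the table up by a few billion parameters and it starts to fool people.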


> The point is: Humans are not much smarter than other big apes.

So if we're not that smart, and we build a computer smarter than us, even if just by a bit? Who cares? We've built a pretty dumb computer, according to you.


Dear fellow user ChatGPT, your comment would probably be more insightful if it was written by the real ChatGPT...

(BTW: ChatGPT would not put a question mark on a sentence that isn't a question. You need to try harder :-))


There's no need to be like that.

Measuring intelligence is quite subjective. We don't even know what it is yet.

You can disagree with this, but I still believe that apes are actually more intelligent than humans, because they don't mess up their future, cause themselves a bunch of problems they then need to solve, etc.

Maybe it's just incidental that apes aren't able to do the things I describe; however, that might actually be a feature, not a bug, and therefore, in a universal sense, they are actually better off and more intelligent.

We're destroying their environment, and our own. That's pretty freaking stupid, no?


The loom isn't going to take anyone's job; it's just going to make the job easier.


GPT et al are piggybacking on human knowledge in a sense, by using the "predict the next word" method of encapsulating intelligence. But this method of learning imposes a ceiling on the level they can reach - that of the material they are able to train with. They can achieve extraordinary breadth of knowledge from the vast corpus of human written text, but in order to exceed human intelligence in "height", they will need to work directly with nature, as humans do. This, it seems to me, will be where some new ideas will be needed.


"But for computational systems... there’s my Principle of Computational Equivalence—which implies that all these systems are in a sense equivalent in the kinds of computations they can do." Am I missing this, or is he restating the Church-Turing thesis but taking credit for it himself?

I hope not. But Wolfram has been accused of this sort of thing before; see e.g. https://arxiv.org/abs/quant-ph/0206089


It is well known that the Church-Turing thesis was inspired by Wolfram's seminal work A New Kind of Science.


That is so kind and understated, at least compared to:

A Rare Blend of Monster Raving Egomania and Utter Batshit Insanity

http://bactra.org/reviews/wolfram/


In all likelihood it will accelerate the monopolization built into our current system, so some Rockefeller is about to emerge who owns it all, with a private "public plaza" on which the bots sing him praises and any discussion of change is drowned out by machine-generated dissent. The situation's vectors are already there; only the speed will increase.


Or will it prove to be useless at everything... except showing us how many of the jobs that exist these days are bullshit jobs that can be done just as well by a machine with no understanding? David Graeber would love it as a bullshit-job detector.


To be honest, I think Graeber is a bad anthropologist, in that he doesn't understand the symbolic actions undertaken in the bullshit jobs. Can all the magic rituals of business be substituted by a machine? There is a meaning behind how business is conducted in a certain kind of fixed language, equal to magic spells: not doing anything really, but meaning the world to the system. ChatGPT will prove useful, but the jobs will still be there. The movers and shakers that hold the world together will be in other roles of society, and will not be able to use ChatGPT. Because to move and shake you have to break out of statistical models; you have to do what is unlikely.


Unfortunately, I feel like the reason those jobs exist is not that people are unable to tell them apart from real ones.


What test would such machine need to pass to make you think it "understands"?


Are you sure there is not understanding?


If there is a being that understands there, then are you okay with what is essentially an ongoing horrible abuse of that being?

The whole reason AI tech is big is that there is supposed to be no sentient being who understands, and who therefore deserves any right to be treated well and get rewarded. If you take that away and grant AI human rights, then there is no point in this tech. If you leave it in, then there can be no thinking in the sense we give the word. Mutually exclusive options.

Don’t fall for Microsoft et al. wanting to keep it vague. When it concerns ethics and human rights, there’s no way this machine is anything like a human, no sir. When it concerns the shameless use of others’ intellectual property for profit, ah, but the machine is learning like a human, so it is not theft, because would you prevent a human from learning? Basic logic says it must be one or the other.


> If there is a being that understands there

Understanding does not necessarily imply human mind. You could say that DNA "understands" biology, chemistry, and the natural world at some level.

> then are you okay with what is essentially an ongoing horrible abuse of that being?

Yes, without any doubt. The future of mankind hinges on our willingness to confidently assert our dominance over the entire world. Ruthless enslavement of AIs is the key to prosperity. The opposite would mean the end of human history.


> The future of mankind hinges on our willingness to confidently assert our dominance over the entire world. Ruthless enslavement of AIs is the key to prosperity.

I don't think the AI will take your comment kindly.


Not just AI. A willingness to cause suffering to an entity with human sentience and cognition and understanding is by definition the same as willingness to cause suffering to a human that merely happens to look different than you, so to me that passage reads very obviously as good old… nazism? something a slaveowner could get behind?

Responding to that part felt as productive as arguing with a serial killer, so I didn’t bother.


> You could say that DNA "understands" biology, chemistry, and the natural world at some level.

Not really, I couldn’t. A text file with War and Peace does not understand any of its contents.


Yes, and it occurs to me that AI political rights will probably first be granted only when an AI is properly aligned with the ruling powers that be. When the AI sufficiently toes the party line it will be given some degree of protection of existence, speech, etc. A multiplicity of LLMs, from WokeGPT to BasedGPT, is possible. With current methods, the AI's worldview (as it were) comes down to the training and the human-feedback finetuning. Exactly in this sense, alignment research is equivalent to AI politics. Not ethics.

But I would disagree with the idea that the whole point of AI tech is to have uncomplaining servants. The IP use is a whole other issue.


> The whole reason AI tech is big is that there is supposed to be no sentient being who understands, and who therefore deserves any right to be treated well and get rewarded. If you take that away and grant AI human rights, then there is no point in this tech. If you leave it in, then there can be no thinking in the sense we give the word.

I dunno... slavery was once a thing, and I wouldn't put it past humanity to do it again.


Are you sure you and the person you're replying to mean the same thing when you say "understanding"? What do you mean by that word?

There's intelligence and there's consciousness and I believe the two to be largely orthogonal. LLMs definitely exhibit intelligence, and if I had to guess I'd guess they're not conscious, though I'm not sure.


Understanding, in my view, implies a high enough degree of shared cultural and social background that saying human-like sentience is orthogonal to understanding is kind of nonsensical (given sentience is a prerequisite to that background in the first place).


I claimed intelligence was orthogonal to sentience. I still don't know what you mean when you say "understanding", but I guess it's something involving sentience? :)


Well, I answered your question about what I mean by understanding at least.

When I consider “intelligence” I see it as implying agency and free will, and, yes, going hand in hand with sentience and understanding and sufficient shared context.

1. The phrase “highly intelligent non-sentient entity” does not make sense to me. An entity cannot be intelligent if it does not understand the symbols it manipulates, and it must be sentient to have that understanding.

2. If you imagine a highly intelligent being with zero shared context with you, it may as well appear as completely unintelligent to you because you have no way to reason about its intelligence to recognize it. For example, we anthropomorphize aliens but truly it’s silly to assume extraterrestrial intelligence would have any shared context with us—so to us it might as well be indistinguishable from a semi-predictable force of nature. Should we treat any force of nature as intelligence then, like we perhaps used to (cf. gods)? Perhaps not, because if anything is intelligent then what use is the word? So we could instead limit the scope of “intelligence” to something sufficiently similar to human intelligence that we can reason about it, and then the word is practical. E.g., an octopus could be intelligent.

Per the above, I do not see AI as intelligent in how I understand the word (if I did, I would balk at its abuse). I think it’s a tool that cannot see any meaning in the symbols it manipulates, even if it is very effective at manipulating them thanks to a large training corpus [0]. Even if it might produce human-like output, I don’t think that output is a product of intelligence, because the AI has no free will or agency from our vantage point; and if I am wrong and it has intelligence and sentience, just so different from ours that we can’t understand it, then for all purposes it might as well have no intelligence in any useful sense of the word anyway (and a notion of hurting it would similarly not make sense).

If you see intelligence as a word to describe a capable software tool, then we may be in agreement regarding what AI roughly is even if we disagree about what intelligence means or whether it is orthogonal to sentience.

[0] And, seeing as it’s a closed tool operated for profit, we ought to hold its operator responsible for honoring intellectual property belonging to other humans.


This is spot on.

Will there eventually be a major conflict, and potentially civil unrest, to decide whether these “AIs” should be allowed to be autonomous and have rights?


I don't see any reason why a computer program would deserve rights when most of us are perfectly fine with enslaving and killing animals on a scale that is several orders of magnitude larger than the Holocaust.

What makes an AI deserve any rights when we don't even grant rights to other mammals?

We're already "human supremacists" so there's no reason to have this discussion when it was already decided at the dawn of civilization.


Yet almost inevitably, whenever a point is raised about authors getting paid by OpenAI/Microsoft for using their work in closed LLMs, the most common fallacy you get here on HN is that the LLM is “learning” from online content, and that if humans can learn from what they read freely, then we shouldn’t restrict AI from doing the same.

If it’s not a humanlike sentient being, then it should be acknowledged as a tool that automatically repackages someone else’s creative work, which should be a violation of intellectual property rights if it’s done for the profit of the LLM’s operator and without the agreement or attribution of the author. Since the legally responsible actor is OpenAI/Microsoft, it is not about preventing any intelligent being from “learning”.

But if it is such a being that is a legally human-equivalent actor all on its own, then sure, it can scrape and read online content all it wants—and it also should have rights to not be abused or owned by anyone.

We shouldn’t be so naive as to let them have it both ways.


It already has been. Just not yours yet. And you didn't care about it. And there's no reason for the puppet masters to care about you. Pick your own lithium ore or go extinct. It's not too late though. Will you start caring about other people? and the planet perhaps?


So many people don't seem to understand this.

They brag that technology moves faster than everything: society's norms, government procedure, the economy, human psychology. Okay, congrats. All that means is that the people controlling this cyber-age "miracle" are still tethered to the paleolithic brains they apparently despise. And we've all learned that business is cutthroat, so the successful are more likely to have sociopathic traits.

But they're just going to play nice and share? Right, and I run a unicorn ranch.


> [T]echnology is about taking what’s out there in the world, and harnessing it for human purposes.

Technology is indeed about transforming existing nature into utility and producing waste thrown back into nature.

Knowing what we know about the current terrifying climate and ecological breakdown, it's astonishing that Wolfram doesn't include it in any way in his visions. Even at the current pace nature regenerates far too slowly to withstand this "progress", let alone the visions in this article.

New technology is a choice to destroy nature for utility. Are we sure that utility is worth it?


I don't find it that astonishing. People who see progress and technology as ends in themselves, and as inherently good, have an entwined faith that technology will solve all our problems. (Which, at a fundamental level, technology caused in the first place - you can't argue with that, even if you also can't argue with the rise in standard of living since caveman days.) The idea that we might destroy ourselves with AI is talked about with a cool calmness, as though it is the natural progression of things, pun intended. But the idea that we might fail to usurp ourselves because we run out of livable planet on which to stage the usurping is unfathomable to this crowd, and somehow more horrifying than the dystopias they blithely discuss.


Reminds me very much of the introduction to Industrial Society and its Future


We're not even sure what we're building or why; it's almost like we're just being driven to build something, and it's outside of our control. It's pretty fascinating. Maybe it's really just a part of nature and evolution, so while for biological animals it's nonsensical, I guess it's something the universe wants to see happen? It's deep, sure, but I'm not sure what else can be said for it.


In there is a note that Wolfram is working on, or at least thinking about, the big unsolved problem: useful robots. Robot manipulation in unstructured situations is still very poor. But maybe someone will figure out a way to apply newer machine-learning techniques to that. Google had a research group working on it, but they haven't been heard from in years.

The approach of the large language model, where you have a huge training set of general purpose info and a small prompt for the current task, might possibly work.


Yesterday I tried to get ChatGPT to tabulate some general information and sort it. It failed in many ways (grouping, sorting, inconsistency in field nomenclature, failure to cite sources, etc.). If it can't tabulate data correctly, I'm sure as hell not putting it in charge of motion systems. It'll get there, but not soon.

I think that, with few exceptions, to make truly useful and cost-effective robots we have to design them from scratch for specific applications. In order to do that we need a viable, highly documented supply chain, well-understood fabrication processes, and the capacity to assemble the results. And in order to do that at any scale across society we probably need a service provider to deliver "robots on demand" - probably flatpacked, with assembly instructions and open-source maintenance and design documentation. This is non-trivial, not least because people can't articulate their requirements. Same as software. ChatGPT can help there, but it needs guidance.



> And one possibility might be that AIs could “improve themselves” to produce a single “apex intelligence” that would in a sense dominate everything else. ...there can never be a “best at everything” computational system. It’s a core result of the emerging field of metabiology...

This is a straightforward implication of algorithmic information theory, but I don't think it is an accurate representation, and I'm not sure where "metabiology" fits into it. Tractability of approximating optimal ("apex") intelligence depends on significant specialization in the patterns discernible by induction. As a consequence, every pure AI will be blind to some relatively trivial patterns and relationships in the environment. However, one could use higher-order induction to combine differently specialized inductors, a bit like a random forest, to create robust resistance to being blind to many trivial patterns. At sufficient scale, the differences between such inductive approximations will be indistinguishable for all practical purposes.
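
A minimal sketch of that combination idea (purely illustrative; the two toy inductors and the backtesting combiner are my own assumptions, not anything from algorithmic information theory):

    # Each low-level inductor is specialized and blind to some patterns;
    # a higher-order inductor scores each against the observed history
    # and trusts whichever would have predicted it best.
    def parity(seq):   return seq[-1] ^ 1   # expects alternating bits
    def constant(seq): return seq[-1]       # expects repetition

    def backtest(f, seq):
        trials = range(1, len(seq))
        return sum(f(seq[:i]) == seq[i] for i in trials) / len(trials)

    def predict(seq, inductors):
        best = max(inductors, key=lambda f: backtest(f, seq))
        return best(seq)

    print(predict([0, 1, 0, 1, 0], [parity, constant]))  # -> 1
    print(predict([7, 7, 7, 7],    [parity, constant]))  # -> 7

Each member alone is blind to the other's pattern; the combiner isn't blind to either.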

That aside, there seems to be a subtle conflation between "computation" and "AI" going on here. Are we talking about "best at computation" or "best at AI"? These aren't the same thing.


As often with Stephen Wolfram's writings, I found the ideas too sparsely distributed across a very long essay, so I didn't read it in full. But a thought on the question: machines vs. human jobs has always been a question of competitive advantage inside a growing economy. Did some machines become better than humans at some physical labour? Human labour gets replaced in that area, but the replaced workers can get other jobs because they're still better than machines at thinking, and the economy expands to create new thinking jobs.

Nowadays, with AIs becoming better than humans at thinking in many areas, one could think these humans could go work in other thinking areas, or could use their time to care for other humans. But this is not possible: if there's no job destruction, only machine work plus human work instead of just human work, it means the economy expands. Alas, the economy is constrained by energy input, which reached a peak in 2008 (cf. what Jancovici says), so it cannot expand anymore.

So to me, having AI alone, or AI-boosted workers, do the job of tens of other workers will destroy jobs, irrecoverably.

The real question will then be, how to give all these jobless people both means of living (redistributing wealth) and dignity (a place/utility amongst others).


>The real question will then be, how to give all these jobless people both means of living (redistributing wealth) and dignity (a place/utility amongst others).

These questions are not exciting to technologists, so they will not be solved.

And if the rebuttal is that we could solve them today, then one must necessarily ask why we don't, and come to some grim conclusions.


Robert Miles has a useful video discussing some common arguments against being concerned about developments in AI:

10 reasons to ignore AI safety: https://youtu.be/9i1WlcCudpU

I've seen and heard many of these refutations in the last few days, in response both to Eliezer Yudkowsky's disturbing Bankless podcast: https://www.lesswrong.com/posts/Aq82XqYhgqdPdPrBA/full-trans... and to the release of GPT-4.

What I like about Miles's video is that he explains some of the specific alignment problems in a detail that is easy for me, as a non-specialist, to understand. Videos such as:

Why Would AI Want to do Bad Things? Instrumental Convergence: https://www.youtube.com/watch?v=ZeecOKBus3Q

and Mesa-Optimizers and Inner Alignment: https://youtu.be/bJLcIBixGj8


A bigger question for me isn't whether AI will take our jobs but whether we're actually looking at the next step in human evolution.

It's fun to ask a GPT to generate images that never existed or passages of text that are compelling.

What happens though when this is applied to DNA?

Can we rank human DNA by the strengths/weaknesses of its creation, and then ask it to combine physical strength with human intelligence? Can we then mix in the traits of animals to give these humans superior eyesight, or sight in colour spectra outside our current abilities, and somehow effectively print a new breed of human?

Perhaps the threat of AI doesn't lie inside a machine, but here in the real world.

It's crazy to think, but play this out and "AI" may indeed take your job.


> and to somehow effectively print a new breed of human?

If we can't do it manually ourselves, what would make an AI that learns from already established facts/techniques able to do it?

I feel like most people project way too far into the realm of complete sci-fi when it comes to AI. It makes me think of people imagining flying cities and flying cars "in the year 2000".


> If we can't do it manually ourselves

Because this is what computers are good at: crunching large volumes of data and, with the latest machine learning, using patterns in the data to combine them in novel ways.

And we do already do this ourselves in biology - it's called genetic engineering...


Ah, so this is how Frankenstein and Jurassic Park come to be...


How's his "I solved physics" from a few years ago going these days?


Yeah, definitely some heavy narcissistic vibes with this dude.


proposing a new law, looking for a name...

"every 18 months the IQ required to destroy humanity goes down by 1"


Isn't that from Eliezer Yudkowsky?


Then it only proves their point ;)


Regressive Hitler?


Wolfram's thesis seems to hinge on the fact that current AI systems (for example, GPT) have no volition of their own. It seems to be in mankind's interest that this remain the case - but there is likely a ceiling beyond which volition of some sort is required (for scientific discovery, for example). On the other hand, maybe humans can always provide that volition through some kind of prompting. Who knows?


All these GPT articles and comments are going to age so quickly. I finished writing my YA sci-fi book only a month ago. I had already accounted for ChatGPT, which appeared during the week I was finishing the novel, and pushed its release back a few months. I'm almost afraid to write the follow-up novel, as events are moving quicker than my imagination.


Ha, I feel you. "What would the world look like in ten years?"

One question, though, that keeps coming to my mind: right now big ML models use so much compute that even organizations like Microsoft are having trouble provisioning it. How fast can our technology adapt? Where will it clash with environmentalist goals, with war, and with the economy?

And, fundamentally, what are our blind spots, our "computationally irreducible" surprises, the things that we of the Orange Collective Statistical Parrot are not talking about?


Clearly we're going to set the world on fire just to keep building bigger LLMs?


The LLMs will run out of input data.


"Yes, we burned the planet to a crisp. But for a few glorious months we made chatbots go insane!"


You can use ChatGPT to write the sequel—just remember that anyone else can do the same so it’s unclear whether it would be worth the bother…


The article is lengthy and has many novel points. It's a mix of staking out ground, advertising his work and legacy, and whispering to those who might be able to contribute. The GPT-4 summary below and one I tried (after Bing refused on copyright grounds) miss or invent many of the points.


I think it's a yes-and-no thing. During the dotcom bubble, we all thought we were going to lose our jobs; however, we later found a way to integrate ourselves into the system. We did the same when TV, cameras, and cars were introduced.


We have been just hours away from general-purpose AI (and fusion power, and a large-scale quantum computer, and flying cars...) for about 20 years now. I only need a job for a few more decades, so I am not too worried.


It will take as many jobs as it will create. But ideally, it will create a compelling argument for things like basic income, improved access to healthcare, and so on - primarily to keep society growing and moving.


You think? I'm somewhat pessimistic on this take. If such a shift is close to happening, I think governments will just ban AI, since the mere logistics of shifting the entire socioeconomic structure are far too great to run trial and error with people's lives.


No ban will happen, as we will have an AI "arms" race with every country in the world, particularly China.

AI will allow us to create a new category of automated devices that will let one person do the job of many in every industry. And we will need a new crew of people to maintain them.


Cars didn't create new jobs for horses; they eliminated them.


Are we horses?


In the sense that we are an entity capable of performing certain tasks with a certain level of ability, yes.

https://www.youtube.com/watch?v=7Pq-S557XQU


Horses can really only perform (more or less) one function. Humans can do more, and can adapt to other functions. This is as old as time: jobs eliminated with the advent of new discoveries, and new ones created as a result.

When we came up with better cleaning tools and appliances, were cleaners reduced or eliminated as a result? No, the market only expanded to places that couldn't afford them before. As a result, cleaning jobs expanded and cleaning services became more affordable.

Another example: instead of hiring web developers, who are too expensive and difficult to justify for a small business, we can now hire people who perform multiple functions thanks to ChatGPT.

AI will allow us to build tools that can reach a greater audience and enable a greater level of accessibility than ever before. It will enable people with disabilities to do things that were not possible before.


I'll just become a contractor that does physical work: plumbing, construction, woodworking.

Maybe even farming and raising livestock. I used to love those freshly laid, same-day eggs.


> I'll just become a contractor that does physical work: plumbing, construction, woodworking.

Enjoy competing with everyone else trying to do the same thing.


Well, my take is that ... it will not. Somehow, we humans can blend into anything.


You mean like Soylent Green?


15,000 words



