
I really do think AI is going to replace millions of workers very quickly, just not in the order we used to expect. We will replace jobs that require creativity and talent before we replace most manual factory workers, because hardware is significantly more difficult to invent and scale up than software.

At this point I have replaced a significant amount of creative work with AI for personal use, for example:

- I use desktop backgrounds generated by VAEs (VD-VAE)

- I use avatars generated by GANs (StyleGAN, BigGAN)

- I use and have fun with written content generated by transformers (GPT-3)

- I listen to and enjoy music and audio generated by autoencoders (Jukebox, Magenta project, many others)

- I no longer purchase stock images or commission artists for many things I previously would have, when a GAN already exists that makes the class of image I want

All of this has happened in the last year or so for me, and I expect that within a few more years this will be the case for vastly more people and in a growing number of domains.



> - I use and have fun with written content generated by transformers (GPT-3)

> - I listen to and enjoy music and audio generated by autoencoders (Jukebox, Magenta project, many others)

Really, you've "replaced" normal music and books with these? Somehow I doubt that.


Not entirely, no; I hope I didn't imply that. I listen to human-created music every day. I just mean that I've also listened to AI-created music that I've enjoyed, so it's gone from 0% of what I listen to to around 5%, and presumably it may increase much more later.


You should try Aiva (http://aiva.ai). At some point I was mostly listening to compositions I generated through that platform. Now I'm back to Spotify, but AI music is definitely on my radar.


Looks great, thanks for the suggestion


What are you talking about, this is my favorite album: https://www.youtube.com/watch?v=K0t6ecmMbjQ


Not to undermine this development, but so far, no surprise, AI depends on vast quantities of human-generated data. This leads us to a loop: if AI replaces human creativity, who will create novel content for the next generation of AI? Will AI also learn to break through conventions, to shock and rewrite the rules of the game?

It’s like the efficient market hypothesis: markets are efficient because arbitrage, which is highly profitable, makes them so. But if they are efficient, how can arbitrageurs afford to stay in business? In practice, we are stuck in a half-way house, where markets are very, but not perfectly, efficient.

I guess in practice, the pie for humans will keep on shrinking, but won’t disappear too soon. Same as the horse maintenance industry, farming and manufacturing, domestic work, etc. Humans are still needed there, just a lot fewer of them.


> if AI replaces human creativity, who will create novel content for the next generation of AI?

The vast majority of human-generated content is not very novel or creative. I'm guessing less than 1% of professional human writers or composers create something original. Those people are not in any danger of being replaced by AI, and will probably earn more money as a result of more value being placed on originality of content. Humans will strive (or be forced) to be more creative, because all non-original content creation will be automated. It's a win-win situation.


> how can arbitrageurs afford to stay in business

Most arbitrageurs cannot stay in business; it's the law of diminishing returns. Economies of scale eventually prevent small individual players from profiting from the market. Only a few big-ass hedge funds can stay, because thanks to their investments they can get preferential treatment from exchanges (significantly lower / zero / negative fees, co-located hardware, etc.), which makes the operation worthwhile for them. With enough money you can even build your own physical cables between exchanges to outperform the competitors in latency games. I'm a former arbitrageur, by the way :)

Same with AI-generated content. You would have to be absolutely brilliant to compete with AI. Only a few select individuals would be "allowed" to enter the market. I'm not even sure it has anything to do with the quality of the content; maybe it's more about prestige.

You see, there are already gazillions of decent human artists, but only a few of them are really popular. So the top-tier artists would probably remain human, because we need someone real to worship. Their producers would surely use AI as a production tool, presenting it as human work. But all the low-tier artists would be pushed out of the market entirely. There will simply be no jobs for session musicians or freelance designers.


> Will AI also learn to break through conventions, to shock and rewrite the rules of the game?

I think AlphaGo was a great in-domain example of this. I definitely see things I'd colloquially refer to as 'creativity' in this DALL-E post, though you can decide for yourself; that still isn't to claim it matches what some humans can do.


True, but AlphaGo exists in a world where everything is absolute. There are new ways of playing Go, but the same rules.

If I train an AI on classical paintings, can it ever invent Impressionism, Cubism, Surrealism? Can it do irony? Can it come up with something altogether new? Can it do meta? “AlphaPaint, a recursive self-portrait”?

Maybe. I’m just not sure we have seen anything in this dimension yet.


>If I train an AI on classical paintings, can it ever invent Impressionism, Cubism, Surrealism?

I see your point, but it's an unfair comparison: if you put a human in a room and never showed them anything except classical paintings, it's unlikely they would quickly invent cubism either. The humans who invented new art styles had seen so many things throughout their lives that they had a lot of data to go off of. Regardless, I think we can already do enough neural style transfer to invent new styles of art.
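
For anyone curious what that looks like concretely, here's a minimal Gatys-style sketch, assuming a pretrained VGG19 and placeholder images (layer choices and loss weights are illustrative, not tuned): optimize the pixels of an image so its deep features match one image's content and another image's Gram-matrix statistics, i.e. its "style".

    import torch
    import torch.nn.functional as F
    from torchvision.models import vgg19

    # Pretrained VGG19 feature extractor, frozen.
    # (On newer torchvision, pass weights=... instead of pretrained=True.)
    vgg = vgg19(pretrained=True).features.eval()
    for p in vgg.parameters():
        p.requires_grad_(False)

    def features(img, layers=(1, 6, 11, 20, 29)):
        # Collect activations at a few ReLU layers.
        feats, x = [], img
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i in layers:
                feats.append(x)
        return feats

    def gram(f):
        # Gram matrix of channel features: captures "style" statistics.
        _, c, h, w = f.shape
        f = f.view(c, h * w)
        return f @ f.t() / (c * h * w)

    # Placeholder inputs; real use would load and normalize actual photos.
    content_img = torch.rand(1, 3, 256, 256)
    style_img = torch.rand(1, 3, 256, 256)
    target = content_img.clone().requires_grad_(True)
    opt = torch.optim.Adam([target], lr=0.02)

    content_feats = [f.detach() for f in features(content_img)]
    style_grams = [gram(f).detach() for f in features(style_img)]

    for step in range(200):
        opt.zero_grad()
        t_feats = features(target)
        content_loss = F.mse_loss(t_feats[-1], content_feats[-1])
        style_loss = sum(F.mse_loss(gram(f), g)
                         for f, g in zip(t_feats, style_grams))
        (content_loss + 1e4 * style_loss).backward()
        opt.step()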


I believe that AI will accelerate creativity. This will have a side effect of devaluing some people's work (like you mentioned), but it will also increase the value of some types of art and, more importantly, make it possible to do things that were impossible before, or allow small teams and individuals to produce content that was previously prohibitively expensive.


There still needs to be some sort of human curation, lest bad/rogue output risk sinking the entire AI-generated-content industry. (In the case of DALL-E, OpenAI's new CLIP system is intended to mitigate the need for cherry-picking, although from the final demo it's still qualitative.)

The demo inputs here for DALL-E are curated and utilize a few GPT-3 prompt engineering tricks. I suspect that for typical unoptimized human requests, DALL-E will go off the rails.


Personally speaking I don't want curation. What is fascinating about generative AI is the failure modes.

I want the stuff that no human being could have made - not the things that could pass for genuine works by real people.


Failure modes are fun when they get 80-90% of the way there and hit the uncanny valley.

Unfortunately many generations fail to hit that.


Yes, but there's no reason we can't partially solve this by throwing more data at the models, since we have vast amounts of data we can use for that (ratings, reviews, comments, etc), and we can always generate more en masse whenever we need it.


This isn't a problem that can be solved with more data. It's a function of model architecture, and as OpenAI has demonstrated, larger models generally perform better even if normal people can't run them on consumer hardware.

But there is still a lot of room for more clever architectures to get around that limitation. (e.g. Shortformer)


I think it's both - we have a lot of architectural improvements that we can try now and in the future, but I don't see why you can't take the outputs of generative art models, have humans rate them, and then use those ratings to improve the model so that its future art is likely to get a higher rating.
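
A minimal sketch of what that rating loop could look like, assuming generated images have already been encoded as feature vectors and a batch of human ratings has been collected (all names and sizes here are illustrative, not any particular product's pipeline):

    import torch
    import torch.nn as nn

    # Hypothetical setup: each generated image is already a fixed-size
    # feature vector (from any pretrained image encoder), and humans have
    # given each one a 1-5 rating.
    class RatingModel(nn.Module):
        def __init__(self, feature_dim=512):
            super().__init__()
            self.head = nn.Sequential(
                nn.Linear(feature_dim, 128), nn.ReLU(), nn.Linear(128, 1))

        def forward(self, features):
            return self.head(features).squeeze(-1)  # predicted rating per image

    def train_rating_model(model, features, ratings, epochs=100, lr=1e-3):
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.MSELoss()
        for _ in range(epochs):
            opt.zero_grad()
            loss = loss_fn(model(features), ratings)
            loss.backward()
            opt.step()
        return model

    # Toy data: 1000 rated generations. Once trained, the scorer can rank
    # fresh generations; the top-ranked ones can be surfaced to users or
    # fed back into fine-tuning the generator.
    features = torch.randn(1000, 512)
    ratings = torch.rand(1000) * 4 + 1
    scorer = train_rating_model(RatingModel(), features, ratings)
    best = torch.topk(scorer(torch.randn(64, 512)), k=8).indices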


> We will replace jobs that require creativity

Frankly, I think the "AI will replace jobs that require X" angle of automation is borderline apocalyptic conspiracy porn. It's always phrased as if the automation simply stops at making certain jobs redundant. It's never phrased as if the automation lowers the bar to entry from X to Y for /everyone/, which floods the market with crap and makes people crave the good stuff made by the top 20%. Why isn't it considered as likely that this kind of technology will simply make the best 20% of creators exponentially more creatively prolific in quantity and quality?


> Why isn't it considered as likely that this kind of technology will simply make the best 20% of creators exponentially more creatively prolific in quantity and quality?

I think that's well within the space of reasonable conclusions. As we get better at generating content/art, we are also getting better at assisting humans in generating it, so it's possible that pathway ends up becoming much more common.


Isn't training data effectively a form of sampling?

Couldn't any creator of images that a model was trained on sue for copyright infringement?

Or do great artists really just steal (just at a massive scale)?


Currently that is not the case:

>Models in general are generally considered "transformative works" and the copyright owners of whatever data the model was trained on have no copyright on the model. (The fact that the datasets or inputs are copyrighted is irrelevant, as training on them is universally considered fair use and transformative, similar to artists or search engines; see the further reading.) The model is copyrighted to whomever created it.

Source (scroll up slightly past where it takes you): https://www.gwern.net/Faces#copyright


Thank you, this is the part I find most relevant:

"Models in general are generally considered “transformative works” and the copyright owners of whatever data the model was trained on have no copyright on the model. (The fact that the datasets or inputs are copyrighted is irrelevant, as training on them is universally considered fair use and transformative, similar to artists or search engines; see the further reading.) The model is copyrighted to whomever created it. Hence, Nvidia has copyright on the models it created but I have copyright under the models I trained (which I release under CC-0)."


But does that still hold when the model has memorized a chunk of the training data? Or can a network plagiarize in its output while itself being a transformative work?


I bet they can claim copyright over the gradients generated from their media, but in the end the gradients get summed up, so their contribution is lost in the cocktail.

If I write a copyrighted text in a book, then print a million other texts on top of it, in both white and black, mixing it all up until it looks like white noise, would the original authors have a claim?
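
To illustrate the "summed up" point (as a technical toy, not a legal argument): in ordinary minibatch training the per-example gradients are averaged into a single tensor before the weights are updated, so no individual example's contribution is separable afterwards.

    import torch

    # One linear model trained on a minibatch drawn from many "sources".
    model = torch.nn.Linear(10, 1)
    x = torch.randn(256, 10)   # 256 examples, imagine each from a different creator
    y = torch.randn(256, 1)

    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()

    # model.weight.grad is the average of all 256 per-example gradients; the
    # single blended tensor that updates the weights no longer identifies any
    # individual example's contribution.
    print(model.weight.grad.shape)  # torch.Size([1, 10])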


Models can unpredictably memorize sensitive input data, so there can be a real copyright issue here, I think.

https://arxiv.org/abs/1802.08232

Worse, sometimes the input data is illegal to distribute for other reasons than copyright.


Those don’t seem in any way similar to, say, writing a TV show or animating a Pixar movie.


I agree, and due to the amount of compute required for those types of works I think they are still quite a while away.

But the creative professions consist of much more than highly paid, well-credentialed individuals working at well-known US corporations. There are millions of artists who just do quick illustrations, logos, sketches, and so on, on a variety of services, and they will be replaced far before Pixar is.


I think this is actually not a bad thing.

I wouldn't say many of those things are creativity-driven. They are more like automated asset generation.

One use case for such a model would be in the gaming industry, to generate large amounts of assets quickly. That process alone takes years, and it gets more and more expensive as gamers demand higher and higher resolution.

AI can make this process much more tenable and bring down the overall cost.


You are probably right. Still, there is hope that this is just a prelude to getting closer to a Transmetropolitan box (assuming we can ever figure out how to make an AI box that can make physical items based purely on information given by the user).


Do you think investing in MSFT/GOOGL is the best way to profit off this revolution?


It's too hard to say, I think. Big players will definitely benefit a lot, so it probably isn't a bad idea, but if you could find the right startups or funds, you might be able to get a significantly larger return.


What GANs do you use to generate stock images?

Do you have a GPT-3 key?



