Hacker News

Survivorship bias, in that you're reacting only to the images you assume are AI. It could be that you're really good at spotting them, or that they're really bad. But it could also be that you spot only a tiny proportion, or even misidentify real images as AI. Without knowing the real rate, it tells us nothing about whether picking AI images over stock images is a good tradeoff or not.
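To make the base-rate point concrete, here's a toy sketch (all numbers are invented for illustration, not data from this thread): two viewers with very different detection accuracy can end up flagging nearly the same number of images, so "I spot AI images constantly" doesn't tell you how many slip past.

```python
# Toy model: how many images a viewer flags as AI, given an assumed
# fraction of AI images in the feed, an assumed hit rate on them, and
# an assumed false-positive rate on real photos. All numbers invented.

def flagged_images(total, ai_fraction, hit_rate, false_positive_rate):
    """Expected number of images the viewer flags as AI out of `total` seen."""
    ai = total * ai_fraction
    real = total - ai
    return ai * hit_rate + real * false_positive_rate

# Scenario A: the feed is 30% AI, but the viewer only catches the
# obvious third of it.
scenario_a = flagged_images(1000, ai_fraction=0.30, hit_rate=1/3,
                            false_positive_rate=0.0)

# Scenario B: the feed is only 9% AI, the viewer catches nearly all of
# it, and also misidentifies a few real photos as AI.
scenario_b = flagged_images(1000, ai_fraction=0.09, hit_rate=0.95,
                            false_positive_rate=0.015)

print(round(scenario_a), round(scenario_b))  # prints "100 99"
```

From the viewer's seat both scenarios feel identical (about 100 "AI images spotted" per 1000), even though in Scenario A two hundred generated images went past unnoticed.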


As someone who purchased stock images via our content team, I can say there were a ton of really schlocky stock images 10+ years ago (and probably longer) that I might be inclined to dismiss as AI-generated today.


Oh, please. I've generated many, many images. They are not hard to spot.


You've already indicated elsewhere that, in a test of images that had not been edited or selected to minimize the risk of detection, you, as someone who has spent lots of time generating AI images, got 2 out of 20 wrong. So clearly it's possible to fool you.

How many more do you think would get past you if the person running the hypothetical campaign had experience similar to yours at picking images, and spent the same amount of time they would spend picking stock photography on ruling out any picture that looks AI-generated to them, or on editing out the things that would tip you off?


The bad ones are of course not hard to spot. The good ones you'll never notice.


Good ones /of what/?

Are we talking a human subject? Nature?


Much respect, but nowadays, unless the person put basically zero effort into making it look realistic, there's no way you can detect whether an image is AI while quickly scrolling. Obviously, if you look at every image with a "let me examine every part of it to see if it's AI" mindset, you can still spot them. But anyone who has spent a few days playing with the latest gen models can create images that pass 90% of sniff tests.


Do you have a test you like? I just took one at https://sightengine.com/ai-or-not?version=2024Q3 and got 18/20 correct, and I'm not zooming in on details or anything; I'm just using some basic discrimination based on what I've generated and seen generated in the past.

I would do even better at this if we limited it to pictures of "realistic" settings.
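As a side note on sample size: 18/20 is a small sample, and a standard Wilson score interval (an ordinary binomial confidence interval; the back-of-the-envelope calculation below is mine, not from the thread) leaves the plausible true accuracy anywhere from roughly 70% to 97%.

```python
# 95% Wilson score confidence interval for 18 correct out of 20,
# showing how wide the plausible range of true accuracy is.
from math import sqrt

def wilson_interval(successes, n, z=1.96):
    p = successes / n
    center = (p + z * z / (2 * n)) / (1 + z * z / n)
    margin = (z / (1 + z * z / n)) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - margin, center + margin

lo, hi = wilson_interval(18, 20)
print(f"true accuracy plausibly in [{lo:.2f}, {hi:.2f}]")  # [0.70, 0.97]
```

So a 90% score on one 20-image quiz is consistent with being anywhere from a mediocre to a near-perfect detector.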


I think we might be talking about two distinct cases. If you're actively considering whether an image is AI, you're already primed to treat it as potentially AI-generated, which improves your slop-spotting. As I mentioned, I definitely agree that it's fairly straightforward to spot the slop if you're looking for it.

I'm not even sure how we could implement a real-life test without bias. Maybe if there were a complete feed of your internet browsing, which at the end of the day asks you to "ballpark the % of media you think was AI", and then goes through the entire feed, scrutinizing it item by item.


Right, and even there I think we'd need to get specific about categories of images. Images that are supposed to be photorealistic are far easier to spot than "battleship in outer space" generations.

Bringing it back to the topic of stock photography: a large percentage of stock photos are of real things, people, and scenery. So when someone says I'll have a hard time spotting generated stock photos, I kinda go: uh, well, no, not generally, because stock photos are very often of people and real-life scenes, the thing that is easiest to spot as a generation.


Has anyone said you will have a hard time spotting them? Because I did not. I pointed out that when you say you can, it's an instance of survivorship bias, and it is one whether you're good or bad at it, as long as we don't have data to tell whether your assumptions were correct.

We still don't know whether you're good or bad at picking out AI images used in actual campaigns, because we have every reason to assume that at least a reasonable proportion of AI images used in actual ads will have been through an editorial process that rules out a lot of the easily recognized schlock; a test that doesn't use images put through the same selection process is therefore meaningless.

I have no doubt you can recognize some. For all I know, you may be able to recognize all of them perfectly. The point was not to argue that you can't, but that your impression can't reliably tell you, because you'd be likely to think the same whether your accuracy is high or low.


I'm not entirely sure why you're discrediting the advancement of realism. I'm very sorry, but I have a hard time believing that when you scroll through IG and see something like this — https://www.reddit.com/r/ChatGPT/comments/1hvdhie/this_girl_... — you'll think it's AI instantly. Unless, again, you're consciously examining whether every single piece of media is AI-generated or not.


A couple of things:

1. This idea that "you can't tell if you mindlessly scroll past it" isn't a very good measurement.

2. Given that IG is slowly filling up with AI slop, I actually do spend a decent amount of time asking "is this AI-generated?"

3. I'm not discrediting "advancement of realism" in AI at all. I'm just saying it's much, much easier to detect AI when a generation is supposed to be of something real.



