I've started intentionally making my answers subtly wrong. E.g., if something might look like a fire hydrant, but isn't, I mark it positive. I usually have to do it a few times anyway, and it makes me feel better to think Google's AI datasets are inaccurate.
Hah nice, I thought I was the only one who did this!
I've noticed that in the more extreme edge cases, it lets me through anyway. Maybe other people aren't paying enough attention to notice that there's actually a difficult-to-see street sign in that particular square.
Sometimes I feel bad that one day, a Waymo car is going to miss a stop sign because of me. But then, I also resent being used as a free Mechanical Turk, and they should know better than to rely on random people from the internet to build safety-critical systems.
In my experience, when I do it slightly wrong it actually takes fewer steps to get through. I guess in the age of YOLOv4 and such, doing it “too well” actually makes you look like a robot?
I've noticed this too. If you do the captchas too quickly, you get more of them as well. If I 'dumb' myself down a little, I usually get only one of them.
Google knows 2 out of the 3. Same with text captchas: they knew 1 of the 2. It assumes that if you got the known ones right, you got the others too. So the key is guessing which one is unknown, picking the correct answers for the known ones, and picking a random answer for the last.
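To make the claim above concrete, here's a toy model of that grading scheme (this is an assumption about how it works, not Google's actual logic): the grader has ground truth for only 2 of the 3 tiles and blindly trusts your answer on the third, so a random guess on the unknown tile always passes.

```python
import random

def grade(answers, known_truth):
    """Pass iff every tile the grader knows about was answered correctly.

    answers: dict tile_index -> bool (your submitted labels)
    known_truth: dict tile_index -> bool (grader's ground truth,
                 covering only the tiles it actually knows)
    """
    return all(answers[i] == truth for i, truth in known_truth.items())

# Three tiles; the grader only knows the truth for tiles 0 and 1.
known_truth = {0: True, 1: False}

# Strategy from the comment: answer the known tiles correctly
# and guess randomly on the suspected-unknown tile 2.
answers = {0: True, 1: False, 2: random.choice([True, False])}

print(grade(answers, known_truth))  # True regardless of the guess on tile 2
```

The point of the toy model: your answer on the unknown tile never affects the pass/fail result, which is exactly what makes this kind of poisoning possible in the first place.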
I never understood the mindset that finds joy in throwing off the datasets being trained on these kinds of captchas. Is it related more to the "rush" of cheating on a test in school (i.e. figuring out a way to "cheat" on the captcha), or rooted more in rebellion against Google/whoever?