All images created or edited with Gemini 2.5 Flash Image will include an invisible SynthID digital watermark, so they can be identified as AI-generated or edited.
Obviously I understand the purpose and the good intention, but I find it sad that we are no longer treated as responsible adults; instead, big corps decide for us what we can and cannot do, snitching behind our backs.
I'm generally against "if you have nothing to hide you have nothing to fear" arguments, but I'm curious what your argument is here for why it would be a problem that AI-generated and AI-edited images can be recognized as such.
Edit: I should probably say for full transparency that I am strongly FOR watermarks for AI imagery
My problem is more with the general idea that tech nowadays is hostile to the user. Before, when you paid for something, it was fully yours to use, in a good way or a bad way.
Imagine, for example, that in the future, with improved tech, knife manufacturers embedded a GPS chip in every knife sold because it might be used for something dangerous.
The worst part of it all is that big tech does this based on its own "moral" compass and not on any legal requirement.
Regarding the watermark, which in theory also applies to generated text: imagine asking an AI to rework a job application letter or a letter to your landlord, and Google snitching on you with a watermark revealing that you used AI for it.
Also, it's not really your image. If an artist puts a watermark on a commissioned piece, it's not really a good argument that the artist is "snitching" by indicating the art was done by them, rather than letting you pass it off as your own...
I don't know if that's the argument you're trying to make, but I think it's worth considering
That's not correct. The watermark is robust to screenshots, file format changes (re-saving as JPEG/PNG), and at least light transformations (cropping, saturation adjustments, etc.).
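For intuition on why a watermark can survive re-encoding and light edits (unlike metadata tags, which any re-save strips), here is a toy spread-spectrum-style sketch in numpy. This is emphatically not SynthID's actual algorithm, which is unpublished and detects blindly; the detector below is "informed" (it gets the original image), and a box blur stands in for lossy processing. All names and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

H = W = 128
B = 8  # block size: a block-constant pattern is low-frequency, so it survives mild filtering

# Synthetic "photo": a smooth ramp plus texture noise
ramp = np.tile(np.linspace(0, 255, W), (H, 1))
img = ramp + rng.normal(0, 10, (H, W))

# Secret key -> block-constant +/-1 pattern spread across the whole image
signs = rng.choice([-1.0, 1.0], size=(H // B, W // B))
pattern = np.kron(signs, np.ones((B, B)))

alpha = 4.0  # embedding strength: invisible-ish, but detectable statistically
marked = np.clip(img + alpha * pattern, 0, 255)

def box_blur(a, k=3):
    """Crude stand-in for lossy processing (re-encoding, resizing)."""
    out = np.zeros_like(a)
    pad = np.pad(a, k // 2, mode="edge")
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / (k * k)

attacked = box_blur(marked)

def detect(received, original, pattern):
    """Informed detector: correlate the residual with the key pattern.
    A watermarked image yields a score near alpha; a clean one, near zero."""
    return float(np.mean((received - original) * pattern))

score_marked = detect(attacked, img, pattern)
score_clean = detect(box_blur(img), img, pattern)
print(f"marked: {score_marked:.2f}, clean: {score_clean:.2f}")
```

Because the signal is spread over many pixels and recovered by correlation, a lossy transformation has to destroy most of the image's low-frequency content before the detection statistic drops into the noise, which is why screenshots and format changes don't remove it.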