Wait some more time and photos or even video recordings will be deemed just as dangerous. And then what? Even if there is real evidence, it will have to be discarded unless it can be sufficiently validated. It will become very hard to prove anything.
Or perhaps camera manufacturers will start building provenance traces in. Anyone who makes surveillance systems (like the ones stores use) should add some end-to-end attestation so they can say "this camera took that recording and we can see that it wasn't tampered with". (If anyone works for a surveillance company, please run with this!) With cryptography we can verify a lot of things, and it sets the bar higher than just asserting that someone took a picture.
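To make the idea concrete, here is a minimal sketch of what per-device signing could look like, assuming each camera is provisioned with an Ed25519 private key at the factory and the manufacturer publishes the matching public keys (the function names here are made up for illustration, not any real product's design):

```python
# Sketch: camera signs each capture; anyone with the public key verifies it.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature
import hashlib

# Provisioned at the factory; the private half never leaves the device.
device_key = Ed25519PrivateKey.generate()
public_key = device_key.public_key()  # published by the manufacturer

def sign_capture(image_bytes: bytes) -> bytes:
    """Camera-side: sign the hash of the captured image."""
    digest = hashlib.sha256(image_bytes).digest()
    return device_key.sign(digest)

def verify_capture(image_bytes: bytes, signature: bytes) -> bool:
    """Verifier-side: check the signature against the image hash."""
    digest = hashlib.sha256(image_bytes).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

photo = b"...raw sensor data..."
sig = sign_capture(photo)
assert verify_capture(photo, sig)
assert not verify_capture(photo + b"tampered", sig)
```

Of course, the whole scheme is only as strong as the weakest device's key storage, which is exactly the objection raised below.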
Of course, in a darkroom someone skilled could always make a fake photo, but the bar is a lot lower with AI.
Cryptography doesn't really fix it. There are a zillion camera makers and all it takes is one of them to have poor security and leak the keys. Then the forger uses any of the cameras with extractable keys to sign their forgery.
Or they just point a camera at a high resolution playback of a forged video.
This also assumes you can trust all the camera makers, because by doing this you're implicitly giving them each authorization to produce forged videos. Recall that many of these companies are based in China.
The point is that the camera maker certifies in court, under penalty of perjury, that their cameras are not compromised and that the image is theirs. "Other camera systems may be compromised, but ours is not...".
Here the camera manufacturer is asserting under oath that they (not their competitors) are secure. Just because IoT is almost always insecure doesn't mean it is 100% of the time; being the exception can let you charge a premium price at times, but you had better really be the exception.
This isn't how security works. IoT devices often stay in the field for decades, generally without updates (and if they do get updates, the update mechanism itself is a security risk).
I think the best proof of authenticity already exists: the quirks each particular camera model has in its recording encoding. A forgery would likely show encoding differences when compared with a genuine video file from the same camera. It would be more difficult with an audio recorder, since you could just record the AI voice in the room, but an audio expert could probably show that the acoustics of the fake don't match where it was claimed to be recorded.
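A crude first pass at that comparison could even be automated. A sketch, assuming ffprobe is installed and that container/encoder metadata (encoder tag, codec parameters) captures some of those model-specific quirks; real forensic analysis goes much deeper, into bitstream structure, GOP patterns, and sensor noise:

```python
# Compare coarse container/encoder metadata of two video files via ffprobe.
import json
import subprocess

FIELDS = ("codec_name", "profile", "pix_fmt", "time_base", "avg_frame_rate")

def encoding_fingerprint(path: str) -> dict:
    """Extract a rough encoding fingerprint from the first video stream."""
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    ).stdout
    info = json.loads(out)
    video = next(s for s in info["streams"] if s["codec_type"] == "video")
    fp = {k: video.get(k) for k in FIELDS}
    fp["encoder_tag"] = info["format"].get("tags", {}).get("encoder")
    return fp

def compare(reference: str, questioned: str) -> None:
    """Print field-by-field matches between a known-genuine and a questioned file."""
    ref, sus = encoding_fingerprint(reference), encoding_fingerprint(questioned)
    for key in ref:
        status = "match" if ref[key] == sus[key] else "DIFFERS"
        print(f"{key:15} {status}: {ref[key]!r} vs {sus[key]!r}")

# compare("known_genuine_from_same_camera.mp4", "questioned_recording.mp4")
```

Matching metadata proves nothing on its own (a careful forger can copy it), but a mismatch against a known-genuine file from the same camera is a quick red flag.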
I wonder if film photography (slide film in particular) will become viable again, and whether photojournalists will once again become necessary to society.
Rather than cryptography, which could be difficult for a non-technical jury to grok, a physical slide of film would be the source of truth.
It can still be faked by photographing a manufactured/generated scene though.