Yeah, that's fair. A CCD sensor basically converts individual photons to electrical charge. What Tesla has said they've done is throw away all the traditional image signal processing and post-processing, which often includes a lot of exposure-related averaging.

You're right, though, that we don't typically use real-time neural networks that operate on spike rate, so an interval needs to be chosen for photon counting. That interval could be considered a kind of exposure, and it is critical that it be short enough to avoid saturation.
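To make the interval constraint concrete, here's a toy sketch. The well capacity and photon rate are made-up numbers, not real sensor specs:

```python
# Illustrative only: pick a counting interval short enough that even the
# brightest pixel in the scene stays below the well capacity.

FULL_WELL = 1000            # electrons a pixel can hold before saturating (assumed)
BRIGHTEST_RATE = 2_000_000  # photons/sec on the brightest pixel (assumed)

def max_safe_interval(brightest_rate_hz: float, full_well: int) -> float:
    """Longest counting interval that cannot saturate any pixel in this scene."""
    return full_well / brightest_rate_hz

interval = max_safe_interval(BRIGHTEST_RATE, FULL_WELL)
print(interval)  # 0.0005 s: you'd need to read out at >= 2 kHz to stay unsaturated
```

The catch is that the brightest pixel's rate isn't known in advance, which is exactly why the interval choice behaves like an exposure decision.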



Lol, this doesn't make any sense. The dynamic range of a fully sunlit California highway at noon in the summer (i.e. the brightness of the darkest vs the brightest spot) is wayyyy higher than any existing sensor can capture. You cannot ignore exposure; you have to choose which part of the scene falls within the brightness range your camera sensor can record. You will have areas of the scene that clip, in other words areas that come out pure black or pure white, with no data.
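For rough intuition, the mismatch can be put in numbers. The stop counts below are assumptions for illustration, not measurements:

```python
# Back-of-envelope for the clipping claim, in stops (powers of two of contrast).

scene_stops = 24    # bright sun vs deep shadow on a noon highway (assumed)
sensor_stops = 12   # single-exposure dynamic range of a typical sensor (assumed)

scene_contrast = 2 ** scene_stops    # ~16.7 million : 1
sensor_contrast = 2 ** sensor_stops  # 4096 : 1

# Whatever exposure you pick, this many stops of the scene fall outside the
# sensor's range and clip to pure black or pure white:
clipped_stops = scene_stops - sensor_stops
print(clipped_stops)  # 12
```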

You can do bracketed exposures, but that's literally the opposite of ignoring exposure.


Just keep the interval short enough that you never saturate the sensor, even in bright sunlight, and let the NN do the summations.

At a fundamental level it is somewhat akin to bracketing, except that all the HDR processing/frame matching is performed within the NN rather than in a traditional image processing stack.

The NN is better placed to do this anyway, since it must already be performing camera pose/motion tracking to correlate what it sees from frame to frame.
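A minimal sketch of that summation idea, with made-up pixel rates and well depth (plain addition stands in for whatever the network actually does with stacked frames; nothing here is Tesla's real pipeline):

```python
# Sum many short, unsaturated reads in software instead of taking one long
# exposure. All constants are illustrative assumptions.

FULL_WELL = 1000  # per-read saturation limit (assumed)

def read(rate_hz: float, interval_s: float) -> int:
    """One short exposure; the count clips at the well capacity."""
    return min(int(rate_hz * interval_s), FULL_WELL)

def summed(rate_hz: float, interval_s: float, n: int) -> int:
    """n short reads summed downstream, in place of an ISP's HDR merge."""
    return sum(read(rate_hz, interval_s) for _ in range(n))

# One long 0.256 s exposure of a very bright pixel clips at the well...
print(read(800_000, 0.256))          # 1000: saturated, data lost

# ...but 256 reads of 1 ms each never saturate, and their sum preserves the
# true brightness, alongside shadow detail from a dim pixel:
print(summed(800_000, 0.001, 256))   # 204800
print(summed(2_000, 0.001, 256))     # 512
```

The summed values span a far wider range (512 vs 204800 here) than any single read could, which is the sense in which the stacking amounts to bracketing performed after the sensor.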



