
The LIDAR Google uses takes more than a million measurements per second and has ~11 cm resolution.


Right, my camera takes more than a million measurements in 1/100th of a second and has a spatial resolution comparable to that or better, depending on the distance.

'A million measurements' sounds really impressive, but on its own it doesn't tell you much. What's a measurement? A single distance reading in front of the car? OK, but then: over what opening angle, how many returns per pulse, how many pulses per second, and so on.

As it stands, that's just a 'big number', and big numbers aren't impressive at all without context.
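To show the kind of context the parent is asking for, here's a back-of-envelope sketch. The beam count and spin rate below are assumptions (roughly HDL-64E-class numbers), not figures from the thread; only the 1M measurements/sec rate comes from the comment above.

```python
import math

# Assumed (not from the thread): a 64-beam spinning unit at 10 Hz.
points_per_sec = 1.0e6   # claimed measurement rate
beams = 64               # vertical channels (assumption)
spin_hz = 10             # revolutions per second (assumption)

points_per_rev = points_per_sec / spin_hz   # all beams, one 360-degree sweep
azimuth_steps = points_per_rev / beams      # firings per revolution, per beam
azimuth_res_deg = 360.0 / azimuth_steps     # horizontal angular resolution

# Lateral spacing between adjacent returns at a given range:
spacing_at_50m = 50.0 * math.radians(azimuth_res_deg)

print(f"{azimuth_res_deg:.2f} deg azimuth step, "
      f"{spacing_at_50m * 100:.0f} cm between points at 50 m")
```

Under those assumptions you'd get roughly a 0.23-degree azimuth step, i.e. on the order of 20 cm between neighboring points at 50 m range. So the raw rate only means something once you pin down the beam count, spin rate, and distance.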


>Right, my camera takes more than a million measurements in 1/100th of a second and has a spatial resolution comparable to that or better, depending on the distance.

Now put a second camera nearby and run a stereo-matching algorithm to build the 3D scene. 10+ years ago (the DARPA Grand Challenge, where the roots of Google's self-driving-car architecture come from), with 1 Mpx cameras and the hardware available then, you'd be lucky to get 1 scene/sec, and a very crude one at that, since 1 Mpx is much lower resolution than our eyes, and resolution is the key to stereo vision. With LIDAR you just get a 3D point for each measurement, with no processing (besides regular filtering and coordinate transformation).
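The asymmetry described above can be made concrete. In stereo, depth only falls out after the expensive step of matching pixels between the two images to get a disparity; in LIDAR, each return is already a range plus two angles. The focal length and baseline below are made-up illustrative values, not from the thread:

```python
import math

# Stereo: depth is *computed* from matched pixel disparity, z = f * B / d.
# The hard part (finding the disparity by matching) is what used to cost
# seconds per frame; this last step is trivial.
def stereo_depth(disparity_px, focal_px=1000.0, baseline_m=0.3):
    return focal_px * baseline_m / disparity_px

# LIDAR: each return already *is* a 3D point; only a spherical-to-Cartesian
# transform is needed (plus filtering, as the comment notes).
def lidar_point(range_m, azimuth_rad, elevation_rad):
    ce = math.cos(elevation_rad)
    return (range_m * ce * math.cos(azimuth_rad),
            range_m * ce * math.sin(azimuth_rad),
            range_m * math.sin(elevation_rad))
```

For example, with those assumed parameters a 10-pixel disparity triangulates to 30 m, while `lidar_point(10.0, 0.0, 0.0)` is simply the point 10 m straight ahead, with no matching step at all.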

I wonder (I haven't touched it myself for years, nor checked the literature) what stereo processing one gets today with 10-20 Mpx cameras on current Intel CPUs plus a GPU. It should be pretty close to what our eyes do, and, most importantly, using several 20 Mpx cameras you could probably do _better_ than our eyes.


You cannot do better than our eyes. The dynamic range of the eye plus its bit depth are unmatched by any camera. We're also backed by a very strong pattern-matching system.

That said, stereo runs pretty damn fast these days, on ASICs. TYZX, which was bought by Intel, sold a stereo camera about 3 years ago that ran at ~52 fps with full point-cloud returns. I think those were running 2+ Mpx.


>You cannot do better than our eyes. The dynamic range of eyes plus the bit depth are unparalleled in any camera.

This is one of the reasons I mentioned several cameras: each camera (or pair of them) can cover a different [overlapping] subrange of light sensitivity and do better than the eye within its respective subrange, so the integrated image may be better than the eye's.


It builds a 360-degree 3D map of its surroundings:

http://i.imgur.com/hmSc9HJ.jpg


What's the sample rate on that?

(how many times per second is the same point revisited)
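One hedged way to answer: for a spinning LIDAR, every azimuth is swept once per revolution, so a fixed point is revisited at the spin rate. The spin rates below are illustrative assumptions for a 1 Mpt/s unit, not specs from the thread:

```python
# Assumed: spinning LIDAR at 1M points/sec; spin rates are illustrative.
points_per_sec = 1.0e6
for spin_hz in (5, 10, 15):
    revisit_hz = spin_hz                        # one pass per revolution
    points_per_frame = points_per_sec / spin_hz  # cloud size per 360-deg sweep
    print(f"spin {spin_hz:>2} Hz: each point revisited {revisit_hz}x/s, "
          f"{points_per_frame:,.0f} points per full frame")
```

So the sample rate per point and the per-frame point-cloud density trade off directly against each other at a fixed measurement rate.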



