
LIDAR does not work in rain, on wet roads or in the snow.

The Google car drives only in good weather in California for a reason. In Sweden it could drive maybe a week a year.

    ... The summer is short
    most of it rains away ...


Google currently has a req out for an electromechanical engineer to work on a new sensor system. I'm betting they're building a new sensor for their vehicle - rotating, but perhaps not LIDAR.

Cracking the heavy rain/snow problem would require more than just a new design though - you'd need a totally different sensor modality, a mix of sensors that can extract enough data, or a system that's far more robust to noise.


I realize the costs would be high, but would there be a way to embed some information in/near the roadway that would assist the onboard systems? I think there might be some advantages in making the road smarter in addition to the car.


There are advantages, but this poses a high bar for adoption. The current approach of keeping the entire suite of sensors on the car means that infrastructure won't need immediate upgrades to support autonomous vehicles.

I have no doubt that smarter roads are in our future, but they pose too high a price for widespread adoption of autonomous vehicles at the outset.


I think this is ultimately the best way forward, but I am excited to see other tech alternatives develop, too.

I really don't think it would be that expensive, at least along Interstates and major highways, to embed a trace wire to help guide self-driving cars. (At least during regular maintenance and new construction - tearing up roads to add it would obviously be expensive.)


Even if you use a sensor fabric on the roadway, as well as high-precision positioning GPS (sub-centimeter resolution) for lane keeping, you will still need a sensor that can build a model of the environment around the car that software can process. Some sort of lidar/radar combination, I believe.


There is "Rain Room" art installation that uses depth sensing IR cameras (not LIDAR) to stop the raining around peoples in the room:

http://youtu.be/EkvazIZx-F0?t=1m56s

Maybe it works only because they use multiple cameras and they don't need very high resolution.


The original DARPA Grand Challenge cars used a range of sensors, including ultrasound, stereo-camera setups, and, for the teams that had the money for it, mm-wavelength radar. LIDAR is just the easiest to process (to get a 3D scene representation of the surroundings) and gets the highest bang/buck ratio in its range of applicability. You need to complement it with other sensors to cover a wider range of conditions.


Apparently LIDAR does work in rain, albeit with some degradation in its effectiveness.

Regardless, I wasn't aware of this before. Why would Google choose this technology? It's bizarre.


There is nothing else that gets the accuracy Google needs. Sensors are currently an unsolved problem. Multi-wavelength radars don't yet have the accuracy, and they don't work well with nonmetallic objects. (There is a limit to how much power a radar can safely use and how expensive it can be. What works for the F-22 doesn't work for Google.)

Google is not planning to monetize this technology anytime soon, despite the hype.

The difference between crude human/animal intelligence and top-notch AI research is still huge. If people needed the accuracy that Google's car needs to move reliably and make split-second decisions, we could never leave the house. We operate using just two cameras and accelerometers. The clear picture and spatial recognition are produced by top-notch heuristics in the unconscious. With self-driving cars it's the opposite: they need millions of very accurate distance measurements per second to drive. Driving the way Google's car does with cameras only is not happening yet.


I am with you, except for your last sentence, which is incorrect.

Mercedes-Benz in Germany has been doing active research in dynamic computer vision for driverless cars since the 1980s.

"1758 km trip in the fall of 1995 from Munich in Bavaria to Odense in Denmark to a project meeting and back. Both longitudinal and lateral guidance were performed autonomously by vision. On highways, the robot achieved speeds exceeding 175 km/h" ... "This is particularly impressive considering that the system used black-and-white video-cameras"

-- http://en.wikipedia.org/wiki/Ernst_Dickmanns

"In August 2013, Daimler R&D with Karlsruhe Institute of Technology/FZI, made a Mercedes-Benz S-class vehicle with close-to-production stereo cameras and radars drive completely autonomously for about 100 km from Mannheim to Pforzheim, Germany, following the historic Bertha Benz Memorial Route."

-- http://en.wikipedia.org/wiki/Autonomous_car#2010s


> They need millions of very accurate distance measurements per second to drive.

I was with you up to there.


The LIDAR Google uses takes more than a million measurements per second and has ~11 cm resolution.


Right, my camera takes more than a million measurements in 1/100th of a second and has a spatial resolution comparable to that or better depending on the distance.

'A million measurements' sounds really impressive, but it does not mean much on its own. What's a measurement? A single distance measurement in front of the car? OK, but at what opening angle, how many returns, how many pulses per second, and so on.

As it stands that's just a 'big number' but those are not impressive at all without context.
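
For rough context, here's the kind of back-of-envelope I mean. The beam count, rotation rate and point rate below are assumed ballpark figures for a Velodyne-class spinning unit, not quoted specs:

    import math

    # Assumed ballpark figures, not quoted specs.
    points_per_sec = 1.3e6   # total returns per second
    rotation_hz = 10         # full 360-degree sweeps per second
    beams = 64               # vertical channels

    points_per_sweep = points_per_sec / rotation_hz   # ~130,000
    points_per_beam = points_per_sweep / beams        # ~2,000 per channel per sweep
    azimuth_step_deg = 360.0 / points_per_beam        # ~0.18 degrees between returns

    # Horizontal spacing between neighbouring returns at a given range:
    for range_m in (10, 30, 80):
        spacing_cm = 100 * range_m * math.radians(azimuth_step_deg)
        print(f"at {range_m} m: ~{spacing_cm:.0f} cm between points")

Under those assumptions, roughly 3 cm between points at 10 m and 25 cm at 80 m, and each point in the scene is revisited once per sweep, i.e. at the rotation rate (here 10 times per second).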


>Right, my camera takes more than a million measurements in 1/100th of a second and has a spatial resolution comparable to that or better depending on the distance.

Now you put a second camera nearby and run a stereo analysis algorithm to build a 3D scene. 10+ years ago (at the DARPA Grand Challenge, where the roots of Google's self-driving car architecture come from), with 1 MP cameras and the available hardware, you'd be lucky to get 1 scene/sec, and a very crude one at that, since 1 MP is much lower resolution than our eyes, and resolution is the key to stereo vision. With LIDAR you just get a 3D point for each measurement, no processing (besides regular filtering and coordinate transformation).

I wonder (I haven't touched it myself for years, nor checked the literature) what stereo processing one gets today with 10-20 MP cameras on today's Intel CPUs plus a GPU. It should be pretty close to what our eyes do, and, most importantly, using several 20 MP cameras you can probably do _better_ than our eyes.
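
For anyone who hasn't touched this, a minimal depth-from-disparity sketch using OpenCV's block matcher; the image files, focal length and baseline below are made-up placeholders, not real calibration data:

    import cv2
    import numpy as np

    # Rectified left/right images from a hypothetical stereo pair.
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Simple block matching; roughly the class of algorithm that was
    # feasible in real time at low resolution a decade ago.
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # pixels

    focal_px = 700.0    # assumed focal length in pixels
    baseline_m = 0.12   # assumed distance between the two cameras

    valid = disparity > 0
    depth_m = np.zeros_like(disparity)
    depth_m[valid] = focal_px * baseline_m / disparity[valid]   # Z = f * B / d

That's the point of the comparison: every depth value here has to be found by matching patches between the two images, whereas each LIDAR return already is a distance.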


You cannot do better than our eyes. The dynamic range of eyes plus the bit depth are unparalleled in any camera. We're also backed by a very strong pattern matching algorithm.

That said, stereo runs pretty damn fast these days. On ASICs. TYZX, which was bought by Intel, sold a stereo camera about 3 years ago that ran ~52 fps with full point cloud returns. I think those were running 2+ Mpx.


>You cannot do better than our eyes. The dynamic range of eyes plus the bit depth are unparalleled in any camera.

This is one of the reasons I mentioned several cameras: each camera, or pair of them, can cover a different [overlapping] subrange of light sensitivity and do better than the eye within its respective subrange, so the integrated image may be better than what the eyes produce.
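
As a rough illustration of the idea, an exposure-fusion sketch with OpenCV; the file names are placeholders, and in this scenario each frame would come from a different co-located camera rather than one camera bracketing exposures:

    import cv2

    # Same scene captured at three different exposures / sensitivities.
    frames = [cv2.imread(name) for name in ("dark.png", "mid.png", "bright.png")]

    # Mertens exposure fusion: blends the well-exposed parts of each frame
    # into one image with a wider usable dynamic range.
    fusion = cv2.createMergeMertens()
    merged = fusion.process(frames)          # float image in [0, 1]

    cv2.imwrite("merged.png", (merged * 255).clip(0, 255).astype("uint8"))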


A 360-degree 3D map of its surroundings.

http://i.imgur.com/hmSc9HJ.jpg


What's the sample rate on that?

(how many times per second is the same point revisited)


A car that only reliably self-drives in good weather conditions is still (a) quite a hard problem, (b) almost certainly a commercially viable product on its own (c) a pretty good start on a car that reliably self-drives in any weather.


I'm not sure about (b) since you'd need to buy two cars unless your boss will accept "but it was raining" as an excuse not to show up for work.


As someone who is blind, I'd buy a car if it could drive me where I need to go 75% of the time. This assumes that weather forecasts would be accurate enough to let me know the car could get me to the grocery store and dry cleaner any time during the current day, rather than leaving me stranded at my destination, unable to get back home.


A self-driving car hardly has to be only self-driving.


You're right of course, I feel silly now. It wouldn't be as cool, though. I was imagining something like a limo without the driver compartment.


Eventually for cities like Beijing to optimize congested transportation infrastructure, it would have to be.


Sure, but that's not going to happen until the sensors can work in the rain (and smog).


I have two use cases in mind:

(1) Delivery-bot. A car that drives itself to drop off a package and only delivers on non-rainy days. (if you need delivery on a rainy day, you pay extra for a person-driven delivery service. Otherwise the package waits at the warehouse)

(2) Transport option for people who can't or shouldn't drive themselves - too old, too blind, too young, or physically impaired. The self-driving car takes them where they need to go when weather permits, otherwise they have to call a cab or van service as a backup option.

(Much of California only has a couple weeks of rain per year.)


Whereas in England, the delivery bots would work two weeks a year.


For Google, sensor technology is pretty much an implementation detail at this point. I imagine their interest is much more in vetting the basic concept and understanding where the gaps are than in pushing toward a near- to mid-term product on the roads.

On the other hand, I imagine that auto manufacturers are much more interested in getting to a viable product--whether it's improved assistive driving features (collision avoidance, speed matching, etc.) or, in the somewhat longer term, autonomy for some limited range of conditions. Hence, for example, Volvo's involvement with government as well.



