Lidar Company Velodyne Debuts $100 Auto Safety Sensor (forbes.com/sites/alanohnsman)
103 points by finphil on Jan 8, 2020 | hide | past | favorite | 81 comments


So Lidar is getting cheaper, but the advantage of Lidar (Vs. optical) may be diminishing.

Lidar worked really well because it was computationally impractical to process visual images in near real-time onboard. Lidar simplifies the informational inflow, which reduces computational cost. But the computational cost and practicality of processing optical data onboard has changed radically, it is now practical and affordable (both due to substantially improved software and new/cheaper hardware).

Keep in mind humans use "optical sensors with parallax" (i.e. our eyeballs). Cars are already optimised around that assumption (e.g. headlights to improve visibility in the visible light spectrum). Lidar still has advantages, but also a bunch of drawbacks (like dispersion and sunlight disruption); optical sensors do too (e.g. glare), but they're a lot more intuitive for humans because we share them.

I guess what I am saying is: Are you betting on better sensor tech (Lidar) or better computational tech (Optical)? I think Lidar will hit a ceiling after the "easy wins" have been consumed (and we're approaching that point), whereas optical has no real ceiling (even over and above humans' innate abilities). With optical you can understand the world as humans visibly see it; Lidar sees the world fundamentally differently, seeing both less (bad) and more (good).

Obviously it is a somewhat false choice, but if people are investing dollars into development of both techs it is a choice that matters.


If we want to be safer, then we need to blend sensors.

We do this as humans. We use our ears for sound and inertial detection.

Lidar, Radar, colour and near infrared optical all have a place in autonomous driving.

I don't think we are really anywhere near optical working reliably as a single source. I am immediately suspicious of anyone who suggests otherwise, because they either believe too much in the state of AI, or haven't thought enough about life-critical systems to be let near anything autonomous. Worse still, they may be out to make a quick buck.

The thing that lidar has is speed and accuracy. 100 Hz update rates and 1/100th-second latency are not unthinkable. With optical systems you're lucky to get 25 Hz with sub-100 ms latency, and that's just for depth estimation using semantics; you still need to feed it into your driving model.

Lidar works way better at night, and it works a tonne better with unknown objects. However, like optical, it's shit in rain/snow. Hence decent-resolution radar will be needed as well.


Radar is pretty garbage in rain and snow as well.

https://www.weather.gov/media/publications/front/14dec-front...

My father was a missile radar guidance technician for a living. He says radar on a car makes almost no sense, especially for navigating in inclement weather, due to how radar tends to reflect off of anything to some degree.


Radar works fine in rain and snow. You can use it to detect rain and snow, but the reflection is very weak compared to the reflection from a solid object, especially at the 0-500m range that these operate at. Big weather radars go out to a couple of hundred miles.


For a vehicle, it will be far less than ideal. It will have problems distinguishing between heavy rain and a solid object, especially at night, without a lot of other high-precision high-refresh sensor data and a good computer to put it all together in realtime.

Every sensor, man-made or natural, sucks at detection when it comes to operating in heavy inclement weather. That's why we need tons of processing power and filtering algorithms. If they were any good, we'd not need those higher-order algos and filters. The sensors could simply generate reliable data.


This is not true.

Automotive radars are in the ~80 GHz range, giving them a wavelength on the order of about 4 mm. Car radar can definitely see through rain and snow. All radars receive a range of reflections from many objects, rain included, but the return off of rain is nothing compared to the return off of a solid object. The filtering then becomes choosing the peak amplitudes of the radar return signal. Rain will manifest as a smearing of the signal and a slight lowering of the peak, but usually much less than a solid object's return.
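
A toy sketch of that peak-picking idea, with simulated numbers (not real radar data): rain shows up as a low-amplitude smear across range bins, a solid object as one tall peak, so a simple threshold above the local noise floor separates them.

```python
import numpy as np

# Simulated radar return: weak, smeared rain clutter across 500 range
# bins, plus one strong return from a solid object at bin 120.
rng = np.random.default_rng(0)
rain_clutter = 0.2 * rng.random(500)   # low-amplitude smear
signal = rain_clutter.copy()
signal[120] += 5.0                     # solid-object return

# Crude CFAR-style filtering: keep only returns well above the
# noise floor (median) by a multiple of the signal's spread.
noise_floor = np.median(signal)
detections = np.where(signal > noise_floor + 5 * signal.std())[0]
# detections -> array([120]): rain never crosses the threshold.
```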

Optical and lidar work miserably through rain/fog, with wavelengths in the ~1000nm range, but have much higher resolution.

Any real self-driving solution in the near to medium future is going to involve sensor fusion of as many sensors the car manufacturers can afford and cram in. This means cameras, ultrasound, lidar, radar, etc. Anybody telling you that there is only one true sensor to rule them all is either selling you something or ignorant of the state of the industry


Is this based on data you've seen or worked on? In general radar performs a lot better than lidar because the wavelength of automotive radar is reasonably large relative to the size of a raindrop.


When I drive down a motorway I doubt that I use sound or inertial cues for planning my further track. I presume that one could drive 'remotely' with only visual input, if it were the same quality as when one is in the vehicle.


>If we want to be safer, then we need to blend sensors.

Isn't that what a Kalman filter does? I thought most autonomous cars have sensor fusion to blend all sensors.


A Kalman filter solves the specific problem of estimating a finite-dimensional state vector from uncertain measurements, where the measurement errors and their correlations follow (or can be approximated as) a multivariate Gaussian.

That works quite well for, say, combining GPS (which has short-term noise) with inertial/odometry measurements (which suffer from long-term drift) to determine your vehicle's position, orientation, and velocity in 3D space (expressible as a 9-dimensional state vector). But it's not directly applicable to problems like combining map data with LIDAR and vision to generate a representation of your surroundings.
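
A minimal 1-D sketch of the GPS + odometry case (all numbers are made up for illustration): odometry predicts the new position and inflates uncertainty, then each GPS fix pulls the estimate back, weighted by the two variances.

```python
# 1-D Kalman filter step: fuse an odometry prediction with a noisy GPS fix.
def kalman_step(x, p, u, z, q=0.5, r=4.0):
    """x: position estimate, p: its variance,
    u: odometry displacement since last step, z: GPS measurement,
    q: process noise variance, r: GPS noise variance."""
    # Predict: move by odometry; uncertainty grows by q.
    x_pred = x + u
    p_pred = p + q
    # Update: blend prediction with GPS, weighted by uncertainties.
    k = p_pred / (p_pred + r)           # Kalman gain
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new

x, p = 0.0, 1.0
for u, z in [(1.0, 1.2), (1.0, 1.9), (1.0, 3.1)]:
    x, p = kalman_step(x, p, u, z)
# x settles near 3.0; p converges instead of growing with drift.
```

The real automotive case is the same recursion over a larger state vector with matrix-valued gains, but the predict/update structure is identical.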


That's informative, thanks. I was under the impression that autonomous cars already combine all their sensor readings


“Sensor fusion” isn’t just a thing, it’s an entire area of study. Kalman filtering is one technique in the umbrella of sensor fusion. There is a lot of room for improvement here.


Humans mostly don't use their ears when driving cars (to the frustration of the rest of us using the road).


You should try using some noise-cancelling headphones and listening to music while driving. The result is really surprising, and it is way more difficult to drive.

Humans are actually using their ears a LOT while driving. You don't realize it because the brain is so powerful at blending everything into a single "feel".


You're listening to your own engine yes, but a car control system presumably has more direct access to that information. You simply can't hear sound from outside - and car makers advertise this as a feature! - to say nothing of the fact that many or most drivers will be playing music by one or another means.


You can hear plenty of sound from the outside, most of it in the lower frequency bands and sometimes only via bone transmission, but it's there. Get a pair of active noise-cancelling headphones and try it for yourself, especially in more complicated driving situations.

While the brain can still operate a car with the loss of some input, visual input isn't really enough and is heavily supported by all other senses (acceleration and gravity, sound, touch and vibration, temperature and kinesthetic). Unlike modern computers, the brain is massively parallel and asynchronous, so it can simply use all of the inputs to find the correct course of action, even going as far as considering multiple lines of thoughts (and then your memory is manipulated so you only experience the chosen line of thought, similar to your vision being manipulated when you blink).


You are also blending sounds from other cars, as well as sounds from your tires on the road which gives you some feedback on the driving conditions. All of those match patterns that you have seen before and give you a better feeling about your car status.

The brain is using hundreds of different sensors and is orders of magnitude more powerful than any computer today. Attempting to drive with a couple of cameras as the only sensors is a recipe for disaster, as has already been shown (see also: Tesla crashes).


Same here, tried it with earbuds once and it was very unnerving. Took them out after a minute.


I suppose it may be regional, but it is illegal to drive with headphones in both ears in California (I got pulled over for it). And I know that in India and Vietnam they honk a lot as a way to communicate.


Terrifying to see fellow cyclists with headphones on - I don’t know how they feel safe doing it.


> ... optical sensors do too (e.g. glare) but they're a lot more intuitive for humans because we share them

That's an important point. The long-term safer option will be to use both types of sensors together, which can provide a more holistic picture of the scene around the car (much like we use radar for adaptive cruise control, which would be much harder with just cameras ... I think).

I hope we keep investing in making all these sensors cheaper and good enough for $30k cars to start packing them.


Jumping on the 'agree' bandwagon.

> The long term safer option will be to use both types of sensors together

This 100% is the real target but I'd like to add to it.

Camera-based systems rely on ambient photons arranged in a pre-determined configuration hitting a sensor. Lidar-based systems rely on reflected photons hitting a sensor.

What I would prefer to see is a LiDAR/RADAR combination. When both systems rely on "photons" you suffer the same inherent weaknesses of both, e.g. fog.

But split it up and you can be SAFER.


> What I would prefer to see is a LiDAR/RADAR combination. When both systems rely on "photons" you suffer the same inherent weaknesses of both, e.g. fog.

Fog isn't the only problem with relying on ambient photons. Nighttime is a really big problem here.

I really don't get this idea that optical sensors and processing are the solution for autonomous navigation. I propose this simple test for the pro-camera people: get in your car at 1AM, drive to a very rural place with a windy road, then turn off your headlights, and try driving on that road at 55mph. If you survive, come back and tell us how safe you think that exercise was.

LiDAR doesn't have this problem, because it generates its own light source (a laser), and doesn't need overly-bright headlights to turn darkness into daytime, with all the problems that's now causing (light pollution, vision problems in people who drive at night from all the glare, etc.).


Why are we turning off our headlights in this exercise?


To illustrate the problems with optical sensors. What happens when the headlights fail, or if one of them fails, or if a bunch of snow or ice builds up in front of them?

I also addressed some of the problems with relying on headlights at the end of my comment: to get better visibility, you need more light. This doesn't come without costs to the environment and to other drivers (like those who don't have driverless cars yet); in fact, too-bright lights are making it unsafe to drive at night now.

As an aside, I think it should be illegal for anyone to have a car with non-halogen headlights without automatic high-beam dimming. Xenon and LED lights are great for illuminating dark roadways, but they're horrible when they're on the car coming towards you and that driver is too stupid or careless to dim their high beams. Temporary blindness is the result.


> What happens when the headlights fail, or if one of them fails, or if a bunch of snow or ice builds up in front of them?

What's expected of a normal driver when the headlights fail? Turning on the hazard blinkers, slowing down carefully, and finding their way to the shoulder?

There's also a reason there are two headlights.


Ah yes, I can't wait for my future where my car refuses to drive me because one of the headlight sensors shows it's failed when it's working fine and the other is fine.


If the cameras only see darkness, which would be a giveaway that the headlights are not working, I would prefer the car not drive itself


Correct, the visibility is the thing you'd want to look out for because it applies to a wide range of issues, including a camera with a blocked lens (either intentionally or by something like a wet leaf).


This is pedantic, but RADAR relies on photons as well - radio waves are just a different frequency of EM radiation vs visible light. A RADAR system emits photons that are reflected and hit a sensor like LIDAR does (or like a camera with a flash does). I think your point that having more diverse sensors is safer stands though.


Lidar worked well (and continues to do so) because it’s the most accurate proximity sensor today for cars. If you don’t want your car to hit anything, you need Lidar. Radar and vision-based stereo are useful too, but 10%+ distance error and overall false negatives make that combo pretty unsafe (especially off the freeway). Computational cost is a far smaller factor than safety in the value of a demo.

Cheap lidar (and probably radar) will earn a permanent place on the ADAS of the future if only because it works much better than vision at night. You'll see them together because they have complementary failure modes and the denser lidar point cloud is worth a $100 add-on.

While many of the autonomy efforts are still struggling with basic detection, the major challenge facing the leaders is in forecasting future motion, especially complex motion. The new cheap sensors are a boon but not a game changer.


We use optical sensors with parallax, yes, but they also have 6-DoF motion capabilities. Cameras on a car are stuck in one place. Take some time and notice how often your head moves while driving. Even just the small motion of our eyes (rotation during fixation and tracking) adds spatial information to the visual stream that fixed cameras simply can't achieve.

A lot of the parallax information we infer is not because we have two eyes, but because we're able to move our eyes/head around, and "fill in the gaps" in a bayesian sort of way.


> Cameras on a car are stuck in one place. Take some time and notice how often your head moves while driving.

That's why multiple cameras are used and positioned farther apart than you could move your head while inside the car.
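
A back-of-envelope sketch of why that separation matters (illustrative numbers, not any particular car's geometry): disparity is d = f·B/Z, so a fixed ±0.5 px matching error maps to a depth error of roughly Z²/(f·B)·Δd, which shrinks as the baseline B widens.

```python
# Depth error at range Z from a fixed disparity-matching error:
# d = f * B / Z  =>  delta_Z ~= Z**2 / (f * B) * delta_d
# focal length in pixels, baseline in meters; all values assumed.
def depth_error_m(focal_px, baseline_m, depth_m, disp_err_px=0.5):
    return depth_m ** 2 / (focal_px * baseline_m) * disp_err_px

wide = depth_error_m(1000, 1.2, 50)    # cameras across the car: ~1.0 m error
narrow = depth_error_m(1000, 0.2, 50)  # head-sway-scale baseline: ~6.3 m error
```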


Those cameras are to provide a 360 view.

There aren't enough of them to do a stereoscopic view in any direction.


I have only one eye, yet I've had no trouble obtaining a license and driving. Except in degenerate activities like ping-pong, "stereoscopic" viewing is not necessary.


Even with only one eye the brain is fairly capable of creating a stereoscopic image in mental space, via various cheats and tricks to get a parallax or by simply making a good guess from experience.


Don't I know it! b^) Except, you know, in ping-pong and other odd situations... I really only meant to challenge the idea that lots of cameras are required. With enough processing power (and, I guess, PTZ) one camera is plenty.


I wondered about this. There's a lot of conversation around the internet about Tesla cars having lots of cameras. While true, I think they need several more at least. Enough to provide stereoscopic vision in every direction. (Plus a few pointing downward, because it's embarrassing for an otherwise reasonably well featured car to not have a 360 parking view in 2020.)


There are other ways of getting a stereoscopic view.

Comparing multiple images from the same camera at closeish points in time while in motion, for instance.
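
A back-of-envelope version of that idea (illustrative numbers throughout): two frames taken dt apart from a camera moving at speed v act like a stereo pair with baseline v·dt, at least for points off the direction of travel.

```python
# Parallax from motion: successive frames give a stereo-like baseline.
def motion_baseline_m(speed_mps, frame_dt_s):
    return speed_mps * frame_dt_s

# Standard stereo triangulation: depth = focal * baseline / disparity.
def depth_m(focal_px, baseline_m, disparity_px):
    return focal_px * baseline_m / disparity_px

b = motion_baseline_m(25.0, 1 / 30)  # ~0.83 m between frames at 90 km/h, 30 fps
z = depth_m(1000, b, 20)             # a 20 px shift puts the point ~42 m away
```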


If true, that's easy to fix.


Cars can be driven remotely using only cameras, so the motion is likely not very critical. https://youtu.be/9sgetWQGYxY


Except when stopped, car visual systems can do parallax just like pigeons do (by moving the camera forwards).


I have serious doubts visual processing under all conditions could ever be better than a combination of both sensors.


It's not just the computational cost of real-time visual inference; there are some fundamental hurdles to be overcome before we attain truly human-level computer vision.

It's hard to imagine a pure vision system ever really matching the performance of a fused sensor system.

Why infer when you can measure?


That was an interesting observation, that lidar might actually be made unnecessary by visual image processing. I always thought Tesla made a terrible choice in excluding (expensive) lidar and saying that if humans can drive safely without lidar, using just visual processing, so should cars (eventually). Surprising, but I guess it may turn out that way.


Maybe lidar could be great for training optical?


This will only augment current optical technology, not replace it.


Trying to emulate human eyesight in self driving cars is hilariously misguided.

1. We have a computer behind our eyes so advanced that we may never be able to come close to replicating it. It is capable of identifying, tracking and predicting the behaviour of multiple objects in real-time even in reduced visibility and can infer new objects without training e.g. a green firetruck or a RV with a satellite dish.

2. Our eyes are connected to a very adaptive and movable object, i.e. our head. In order to perceive depth and identify objects, e.g. an actual person versus a photo of a person, we continuously move our heads around in multiple dimensions. A car can't do this.

I would refer everyone to the countless examples of Tesla's Autopilot recognising humans on bus signs, sides of trucks, etc. and attempting auto-avoidance. That is an unsolvable problem with only optical cameras.


> continuously move our head around in multiple dimensions. A car can't do this

"Doesn't" isn't "can't". There's no reason why car-mounted hardware can't move just as much as eyes do.


Metal fatigue (and other wear and tear) might be a reason; our muscles are self healing and regenerating. It would likely be cheaper and less error prone to simply have more sensors, not moving at all or with much narrower ranges of motion to simulate depth perception. Training a computer to understand a simulacrum is an entirely different challenge, I think.


Of course cars can move their dozen or so cameras hundreds of times a second. But good luck getting a DNN to process all of that, as well as handling tasks like identifying traffic lights and pedestrians.

The head of Tesla AI has already stated that even with the new Nvidia hardware they struggle to meet the computational requirements.


Those are two largely unrelated problems. Reconstructing geometry from multiple views is a separate (very well understood) problem from analysis to understand what the shapes mean.


They are completely related when you understand that there is a fixed computational budget in which to operate.

Not to mention for Tesla having to abandon all of their training data.


Where has Tesla ever said they needed to abandon all of their training data?


Head movement is a useful trick but it's a pretty minor improvement. Just having a few cameras with good separation will beat it.


I'm not a big hardware nerd; how do this and the $16,000 one from earlier this week compare? They're both rooftop spinning sensors, it seems, but at very different price points. Does this one not have as much range as the other?


LIDAR units have different effective ranges and, most importantly, number of points they capture. These cheap units generally have a small number of points. Looks like the Velabit has a range of about 100 meters, but FOV is 60 degrees horizontal by 10 degrees vertical, so you would need to mount several of these around a vehicle to get the equivalent coverage of a 360 degree unit. The press releases don't say how many points are in its point cloud, but I'm guessing it's probably 1/10th of a larger unit's.
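
Rough coverage arithmetic from the FoV figures quoted here (60° horizontal per unit; the vertical-band comparison below is a hypothetical target, not a published spec):

```python
import math

# Units needed to tile the full horizontal sweep with 60-deg FoV each,
# ignoring mounting overlap. Vertical case assumes a hypothetical
# 30-deg band built from 10-deg-FoV units.
h_units = math.ceil(360 / 60)   # 6 around the vehicle
v_units = math.ceil(30 / 10)    # 3 stacked for a 30-deg vertical band
```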


To add to above comment: comparison of some different models for Ouster (a competitor):

https://ouster.com/blog/128-channel-lidar-sensors-long-range...

Discussion about above:

https://news.ycombinator.com/item?id=21970796


Better link: https://velodynelidar.com/press-release/velodyne-lidar-intro... states "60-degree horizontal FoV x 10-degree vertical FoV". I wonder what the resolution is, though. Judging by the name "Velabit" it could be low.


It wouldn't have to be too much to overcome the resolution of current front-facing radar and ultrasonic sensors, which are 1x1 resolution ;)


Very cool to see this price point. If Velodyne really can manufacture and sell it in volume and be profitable it's an amazing evolution for Velodyne.

If it's a loss leader to buy attention and stay relevant versus their fast moving competitors it will become obvious eventually.


It'd be pretty cool to use this as part of an assistance tool for people with visual impairments. Convert the positional data to something you can learn to parse with your ears, and people could get a better sense of obstacles around them. I'm really not sure what the right conversion strategy or user interface is, but it seems like the sort of thing that'd be relatively straightforward.
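
One naive conversion strategy, purely as an assumption sketch (not an existing product): map each detected obstacle's distance to pitch and its bearing to stereo pan, so nearer things sound higher and direction comes from left/right balance.

```python
# Sonify (bearing_deg, distance_m) obstacle points into (freq_hz, pan)
# audio cues. Mapping choices (200-1000 Hz, +/-90-deg field) are
# arbitrary assumptions for illustration.
def sonify(points, max_range_m=10.0):
    cues = []
    for bearing_deg, dist_m in points:
        closeness = max(0.0, 1.0 - dist_m / max_range_m)
        freq = 200 + 800 * closeness     # 200 Hz (far) .. 1000 Hz (near)
        pan = bearing_deg / 90.0         # -1 (full left) .. +1 (full right)
        cues.append((freq, pan))
    return cues

# An obstacle 2 m away at 45 deg left, and one 9 m away dead ahead:
cues = sonify([(-45, 2.0), (0, 9.0)])
```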


Microsoft has something similar: https://www.microsoft.com/en-us/ai/seeing-ai


It's really nice to see LIDAR getting cheap. I've wanted to play around with them for ages, but the buy-in for anything other than cheap 2D units has always been way too high for me to justify. I do wish there were at least a few specs in this, though.


Yes, there's nothing about the resolution. You can get single-beam scanners for about that price now. If it's only a few beams, it's not a big deal. You can get an automotive LIDAR with a small number of beams from Continental right now.

If you took a single-beam LIDAR and reflected it off a reflective prism where each face is at a slightly different angle, you'd have a somewhat slow multi-beam scanner with only one emitter and detector. I looked into doing that for the DARPA Grand Challenge. We were thinking of a mod to the rotating scanner used in laser printers to sweep across the page. There are optical companies that can cut and polish custom prisms. We were too small a team to build our own scanner, though.


Another way to do it is to mount a 2D LIDAR to scan in the vertical plane, and then pan that around. A 180 degree pan will net you a dome/spherical volume (although in a slow manner).

You could do this easily with cheap parallax sensors (like you find on Neato vacuum cleaners, and which are now sold as separate units - RPLidar and such). This has also been done using SICK coffeepot-style 2D LIDAR units (such rigs tend to be large and heavy, tho).

Something I've often considered is the idea of a "stochastic" scanner - that is, don't worry about strict angles, just let the sensor scan at random angles, and note the angle and reading. Over time you'd build up a complete scan. Just an idea I've rolled around in my head; the idea was to eliminate the need for syncing and timing of the scan hardware (2D or 3D), at the price of not necessarily getting a complete perfect scan all the time. It was something of a thought experiment I had while thinking up ways to DIY a LIDAR sensor in a very cheap manner, beyond what has already been done.
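
The stochastic-scanner thought experiment above can be sketched in a few lines (the range function is a stand-in for real hardware, and the bin sizes are arbitrary assumptions): sample random angles, record (angle, range) pairs, and watch angular coverage fill in over time without any sync or timing machinery.

```python
import random

# Stochastic scan: random (pan, tilt) angles, each paired with a range
# reading. fake_range() is a placeholder for the actual sensor.
random.seed(1)

def fake_range(pan_deg, tilt_deg):
    return 5.0                          # pretend: flat wall 5 m away

covered = set()
readings = []
for _ in range(5000):
    pan = random.randrange(0, 360)      # 1-degree pan bins
    tilt = random.randrange(-30, 31)    # 1-degree tilt bins
    readings.append((pan, tilt, fake_range(pan, tilt)))
    covered.add((pan, tilt))

# Fraction of the 360 x 61 angular bins seen so far; it approaches 1
# only asymptotically, which is the "not always complete" trade-off.
coverage = len(covered) / (360 * 61)
```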


CSIRO did something like this with their Zebedee system[0], basically a Hokuyo 2D spinning lidar on a spring on a handheld stick. I love this from a design perspective, because it's very usage focused. You already have a human walking around site scanning, and a spring like this gives you the pitch and roll for "free", compared to panning it using an (expensive/heavy/clunky) electromechanical mechanism.

[0]: https://www.youtube.com/watch?v=DUEAz_naHHg


I knew I'd seen one that did exactly like that but could not find a video of it!


> This has also been done using SICK coffeepot-style 2D LIDAR units (such rigs tend to be large and heavy, tho)

Yes. Had one of those once. Worse, we had the weatherproof SICK unit, which is even bigger. We prepared for the DARPA Grand Challenge expecting much more off-road. Way too much skid-plate stuff, armored cables inside mesh washing-machine hoses, sensor cleaning with washer fluid and compressed air, stuff like that. Totally unnecessary, as it turned out.

> Something I've often considered is the idea of a "stochastic" scanner

Someone sells one of those. There are two unsynchronized scan axes. More for stationary than for moving applications.


Do you happen to know if there's a LIDAR system (I'm aware that many of them just use LEDs) that operates at UV frequencies near the transition to visible?

I'm looking for a LIDAR that can provide basic terrain and obstacle mapping underwater.


Yeah I notice that they are not showing a sample pointcloud.


It's solid-state. Older lidar needed an expensive revolving sensor.


Would be really cool to retro-fit these on an older vehicle. Replace the center console with a tablet, some OS handles the lidar.


how does one buy this?


From Velodyne, presumably, and their website says it will be available mid-2020


My 2021 prediction: the Chinese have already got the specs and will start selling $5 units within a year.


You beat me to it, but I was going for $50 ;)


It will be $5 in China and $50 in the US.


$79 for IoT enabled version ;)


And a $0.02 version for Africa, where an export controlled weakened processing HW/SW combo is used instead of regular one.



