Hacker News

Their "autopilot" is basically a "driver assist" system: lane-keeping plus adaptive cruise control. Mercedes, BMW, Cadillac, Volkswagen, Ford, etc. already have that. Tesla's is rumored to be the Daimler-Benz system.

That's good enough for about 99% of freeway driving. The last 1% is a problem, which is why none of the big car companies call it automatic driving. Most of them put in systems to ensure the driver keeps paying attention, such as insisting on hands on the wheel in auto mode.

We're approaching the "deadly valley" - automatic driving that's almost good enough that the driver can stop paying attention. On the far side of the "deadly valley" is full-auto driving, including automatic handling of unusual and emergency situations, which is where Google and CMU/Cadillac are headed.

The minimum safe level is probably a system that can get the vehicle stopped autonomously when it's headed into a situation it can't handle. Beeping the driver to take over is not going to work in practice. As soon as hands-off driving is available, people will use it when tired, drunk, or texting.



The interesting thing is that, based on my experience with the cruise control/driver assist on the 5 Series, it's far better at responding quickly to changing road situations, like when somebody in front of you just jumps hard on the brakes. The car will take it all the way down to a stop in just a few seconds - quite often before the driver would have even realized something was happening, and way before they could have reacted to prevent an accident.

I'm guessing once the technology is widespread, we'll see almost total elimination of rear-end accidents on the freeway - the technology really is quite amazing.


Ideal human reaction time is around 200 milliseconds[1]. At 60mph that's more than 1 car length.

How many milliseconds does it take the 5 Series to notice what's going on and start applying the brakes?

[1] If you are a) early 20's, b) male, c) responding to an audio cue, and d) doing exactly one thing on the cue (as opposed to making even a very simple decision based on the cue), you can get down to 150ms.
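For concreteness, here's the arithmetic behind "more than 1 car length" (the ~4.5 m car length is an assumption for a typical sedan):

```python
MPH_TO_MPS = 0.44704  # exact conversion factor, metres per second per mph

def reaction_distance_m(speed_mph: float, reaction_s: float) -> float:
    """Distance travelled (metres) before the driver's response even begins."""
    return speed_mph * MPH_TO_MPS * reaction_s

d = reaction_distance_m(60, 0.200)
print(f"{d:.1f} m")  # 5.4 m -- more than one ~4.5 m car length, before any braking
```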


On the freeway, the time it takes to react to seeing the next car's brake lights isn't the main problem. The bigger problem is how long it takes to judge that car's deceleration.

Are they just tapping the brakes because the road is bending---some people always do---or are they slamming on the brakes because there's a traffic incident that you can't yet see? Your 200-millisecond reaction needs to be completely different in these two cases, but you just can't immediately tell the difference.

Both for people and for automatic cars, judging deceleration is a harder problem than just reacting to a red light going on, but the sensors on automatic cars should give them a big advantage.


> Ideal human reaction time is around 200 milliseconds.

Sure, if they're not looking down at the radio or at the accident on the other side of the freeway.


And then there's the time to actually make your movements. Getting your foot onto the brake pedal and pushing on it takes appreciable time, both for the foot to make the movement and for the nerve impulses to get there to make it move.

I recall from looking into this stuff a bit that a good reaction time from "oh shit" to "brakes are applied" is around 750ms. Accounting for people who aren't fully on top of their game (distracted, tired, injured, old, etc.), road designers assume more like 1.5-2s delay for applying brakes. And even that will be optimistic in some cases, as many car crashes demonstrate.
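A rough sketch of how that reaction delay feeds into total stopping distance, under the usual constant-deceleration model (the ~7 m/s² braking figure is an assumed value for dry pavement):

```python
def stopping_distance_m(speed_mps: float, reaction_s: float,
                        decel_mps2: float = 7.0) -> float:
    """Reaction distance plus constant-deceleration braking distance (v^2 / 2a)."""
    return speed_mps * reaction_s + speed_mps**2 / (2 * decel_mps2)

v = 26.8  # ~60 mph in m/s
print(f"{stopping_distance_m(v, 0.75):.0f} m")  # ~71 m with a good 750 ms reaction
print(f"{stopping_distance_m(v, 2.0):.0f} m")   # ~105 m with the 2 s design assumption
```

The braking portion is the same in both cases; the extra 30-odd metres is pure reaction delay.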


There's that, but also the realization that you need to brake "as hard as possible"-- we're not trained to do that often so there's additional delay before that happens.


Hence why he said "ideal".


And my point is that talking about the ideal human response time is absurd in the context of driving.


Why? It provides a lower bound on human response time. It's significant because the ideal response already represents _at least a whole car length_ of travel distance. We don't even need to delve into what a realistic response time is: it's already so bad that we should be looking at alternatives like our machine-overlord-self-driving-cars.


The protection is useful because for a highway rear-end accident to occur, a chain of at least two events has to occur. The car ahead of you has to significantly decelerate, and in the same window of time, the rear-ending driver has to be distracted for longer than the time it takes to close the gap and stop. Most of the time both of those events don't occur at the same time, but just look at traffic accident reports to see that in areas with heavy traffic, the probability that both events occur somewhere in the population of cars is effectively 100% (it happens multiple times every day).
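The "rare per pair, near-certain per population" arithmetic is easy to sketch (the per-pair probability and pair count below are made up purely for illustration):

```python
def prob_at_least_one(p_single: float, n_pairs: int) -> float:
    """Chance that both events coincide for at least one of n independent
    car-following pairs, given a tiny per-pair probability."""
    return 1 - (1 - p_single) ** n_pairs

# One-in-a-million per pair, across a big metro area's daily following pairs:
print(f"{prob_at_least_one(1e-6, 5_000_000):.1%}")  # 99.3% -- effectively certain
```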

Adding some sort of auto brake feature adds another layer of safety (presuming that the ratio of 'saves' to added risk is very high with this tech.)


With autobrake and safe following distance, even approaching 25% of cars using the systems would drastically reduce the number of multi-car pileups.

I think a recent one in the news was 20+ cars. If just one of those cars had autobrake and safe following distance this could have been halved or quartered and wouldn't have blocked a highway for a few hours.


You will never be as fast as software, hardware and sensors.


This was the point I was trying to make, but I think I didn't communicate it well.

I would like to know just how fast these cars can react. I'm betting they could be actually slowing the car before a human is even aware something is wrong, but I would like to have some numbers to cite.


Also - keep in mind that this type of Driver Assist is (in the scheme of things) a very straightforward problem compared to others. It's pretty easy to calculate what type of braking is required to avoid running into something in front of you based on the relative velocities.
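For the simple case where the lead vehicle holds its speed, the required braking falls straight out of the kinematics. A sketch (illustrative numbers):

```python
def required_decel_mps2(closing_speed_mps: float, gap_m: float) -> float:
    """Constant deceleration that zeroes the closing speed exactly as the gap
    closes, assuming the car ahead holds its current speed (the simple case)."""
    if closing_speed_mps <= 0:
        return 0.0  # not closing on the car ahead; no braking needed
    return closing_speed_mps**2 / (2 * gap_m)

print(required_decel_mps2(10, 25))  # 2.0 m/s^2 -> gentle braking
print(required_decel_mps2(10, 8))   # 6.25 m/s^2 -> hard braking, near dry-road limits
```

If the lead car is itself decelerating, its deceleration adds on top, but the structure of the calculation stays the same.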


Not sure about the Model S, but Google's Prius and Lexus autonomous vehicles use a radar sensor on the front of the vehicle that can see past a semi tractor trailer in front of it, providing sensor input a human would never have.


Ideal humans do not cause accidents; normal humans do, and normal human reaction time while driving is not around 200 milliseconds.


Better than what? The average human driver?


That seems to be the logical comparison.


Yeah, but can the computer be situationally aware enough to see the kid in the 2003 BMW 3 Series ahead on his cell phone, and preemptively switch to the furthest lane away from him when his head bobs down to send another text, before he ever has a chance to pose a risk to you in the first place?

We can increase the efficiency of reactionary driving as much as we want with computers, but for me, you just can't beat defensive driving.

Just curious: how well do these systems handle potential rear threats? How would one handle tailgaters? Some tailgaters are more aggressive than others. Some just need a friendly reminder that you do not appreciate the tailgating, by a nice easy slow-down or a slight tapping of the brakes. Others -- such as the guy in the F-250 Super Duty with 30-inch tires and a lift kit -- you might just want to go ahead and move over when you see them coming.


But the gauge for success shouldn't be whether it handles every situation better than humans; it should be whether the use of these tools leads to a lower rate of accidents/injuries/deaths. Just pointing out that there might be specific situations where humans would do better, while downplaying all the situations where an automated system would do better, isn't all that useful. I mean, it works on an emotional level, but we should try to be objective by looking at the numbers.

Saying everyone should just drive defensively isn't a solution. The evidence for that is the fact that that's what we currently try to tell everyone to do, and it's still a problem that people drive poorly.


Right.

Saying everyone should just drive defensively is most definitely not a solution.


I'm not really sure what you're implying...

We need better enforcement? We try to do that by writing laws and issuing tickets. And, again, poor driving is still a problem.

How about we FORCE people to drive defensively? That sure sounds like what automated driving is trying to do.


> Yeah but can the computer be situationally aware to see the kid in the 2003 3 series bmw ahead of him on his cell phone and preemptively switch lanes when his head bobs down to send another text to the furthest lane away from him before he ever has a chance to pose a risk to you in the first place?

Probably about as much as the average driver. Hell, a lot of drivers on the road don't even notice a turn signal and get pissed that they're stuck behind a turning car in the lane.


Around here, I'd say the average driver probably thinks that an active turn signal is just faulty wiring.


For the foreseeable future, there will always be scenarios in which a human could perform better than an automated system. What I'm talking about here though, is the 6,000-7,000 accidents/year that could have been prevented by an automated system that is never distracted, and has <10 millisecond response timing.

What I worry about is that these systems will (of course) have bugs, and they are mechanical systems, so they will also physically fail - and they will cause some deaths (cars are dangerous - 36,000 fatalities/year). But if for every one of those bug/physical-failure deaths you have 10 where the automated system avoided an accident, that's a pretty big net savings in human lives.

And, software/hardware systems are only going to get better - that's certainly not the case for human drivers. Indeed, with cell phones, regardless of whether they are hands-free, there is research to show we are becoming less capable drivers than we were 30 years ago.


For the average driver who's on their own cellphone instead of doing "defensive driving", the system will do better than them. Professional drivers will still drive manually. Most people aren't professional drivers. (A reverse computer-to-car-analogy: most people don't know enough to know what programs are safe to run. Better something like SmartScreen/GateKeeper for most people. Professional computer users can "drive manually." But don't force everyone to do it, just because you want the option. They'll suck at it.)


If it can effectively respond to the dangerous moves of the kid on his phone, does it need to be as forward-looking and defensive as that?

Tradeoffs are different between computers and humans. Defensive driving is about trying to stay within an envelope where you can safely respond to whatever happens. A computer system will have a different envelope, so it won't need to do the same stuff.

To make a terrible analogy, a human pilot flying by eye needs to eventually make a decision to land at an alternate airport when his primary is fogged in. Is a computer smart enough to make that judgment call? Well, if the computer is equipped with a zero-zero landing system and can safely land in the fog, who cares?

Your question about rear threats is intriguing. The proper response is a lot less clear there, I think. Do you speed up? Slow down? Change lanes? Squirt them with washer fluid? Drop some caltrops from your rear bumper?


I'll be sure to set (hack, reprogram?) my future self-driving car to dispense liberal spraying of washer fluid after finally weaving around and getting in front of the asshat going 54mph in the left most lane on 101.


I can't seem to reply to derefr, but to echo his comment, this will be a safety measure in the same way that Apple's TouchID is a security measure.

We know that a fingerprint is a bad password. It can't be changed, you leave copies everywhere, authentication is based on an approximation, etc. Strong, frequently changed passwords would be much better. But it is significantly better than the security on most people's phones: a 4-digit passcode, or none at all.


Who downvoted you? This is a very serious issue. A partial autopilot for cars is worse than useless: it will create distracted drivers who can't react when the car encounters something it can't handle. Self-driving cars need to be either 100% or 0%.


I used to work on automatic driving. I ran Team Overbot in the 2005 DARPA Grand Challenge. (We lost, but we didn't crash into anything.) So I'm painfully aware of the problems of automatic driving.

You need to be both looking at the road with cameras and profiling it with LIDAR. (Or terahertz radar, once that gets going.) It's not enough to just sense the car ahead. You need to be able to detect potholes, ice patches, junk on the highway, small animals, and similar problems. We could detect and avoid potholes back in 2005. Since we were doing off-road driving, that was a normal driving event.

The reason for a high-view LIDAR is that you want to see the pavement surface ahead from a reasonably useful angle and get a 3D profile of the road ahead. Google uses the Velodyne spinning-cone LIDAR scanner, which is a lot of LIDAR units built into one rotating mechanism. That's a research tool. There are other LIDAR devices more suited to mass production. Advanced Scientific Concepts has a nice eye-safe LIDAR which can operate in full sunlight. It costs about $100K, but that's because it's made by hand for DoD and space applications. The technology is all solid state, not inherently that expensive, and needs to be made into a volume product. (Somebody really needs to get on that. In 2004, I took a venture capitalist down to Santa Barbara to meet that crowd, but there was no mass market in sight back then. Now there is.)

You can only profile the road out to a limited distance, regardless of the sensor, because you're looking at the road from an oblique angle. Under good conditions, though, you can out-drive the range at which you can profile the road. That was Sebastian Thrun's contribution, and won the DARPA Grand Challenge. The idea is that if the LIDARs say the near road is good, and the cameras say the far road looks like the near road, you can assume the far road is like the near road and go fast. If the far road looks funny, you have to slow down and get a good look at the road profile with the LIDARs.
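That near/far strategy reduces to a simple speed policy. A toy sketch (function and parameter names are invented here, not taken from Stanley's actual code):

```python
def choose_speed_mps(near_road_ok: bool, far_looks_like_near: bool,
                     lidar_limited_speed_mps: float,
                     cruise_speed_mps: float) -> float:
    """Only trust the camera's long-range judgement when the LIDAR has
    verified the near road; otherwise drive no faster than the LIDAR
    profiling range allows."""
    if near_road_ok and far_looks_like_near:
        return cruise_speed_mps  # camera extends confidence beyond LIDAR range
    return min(cruise_speed_mps, lidar_limited_speed_mps)

print(choose_speed_mps(True, True, 15.0, 30.0))   # 30.0 -> go fast
print(choose_speed_mps(True, False, 15.0, 30.0))  # 15.0 -> far road looks funny, slow down
```

Here `lidar_limited_speed_mps` stands for the speed at which you can still stop within the distance the LIDAR can profile.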

Automatic driving systems have to do all this. "Driver assistance" systems don't. Hence the "deadly valley".

That's just to deal with roads and static obstacles. Then comes dealing with traffic.


But why LIDAR though?

Humans prove that two not-particularly-high-resolution cameras are sufficient for the current level of driving. Wouldn't pushing in this direction remove this expensive component?


Humans also possess a listening system, a balance system, and a highly advanced pattern-recognition system backed by a huge database of images (which to this date hasn't been replicated - face recognition doesn't count; it needs to recognize cars, signs, people, animals, pavement, trees, obstacles, etc.), not to mention knowledge of various possible scenarios, various models of how their body/car/traffic works, etc.

You get to use worse hardware, but you need several orders of magnitude better software.


> a highly advance pattern recognition system filled with auto complete from a huge database of pictures (which to this date hasn't been replicated - face recognition doesn't count it needs to recognize cars, signs, people, animals, pavement, trees, obstacles, etc.)

I expect that, after the first wave of clumsy LIDARing self-driving cars, all the car companies (Google especially) will be collecting training data from the cars' sensors to build exactly this kind of model. In fact, I wouldn't be surprised if that was what the Google car was really about, in the same way Google Voice is really about collecting speech training data.

The best part of this kind of training data is that it all comes pre-annotated with appropriate reinforcements: even if the image-recognition sensors aren't hooked up to anything, they're coupled to the input stream from the other car sensors and the driver's actions. So you would get training data like

- "saw [image of stopsign], other heuristically-programmed system decided car should stop, driver confirmed stop."

- "saw [image of kitten standing in the road], other heuristically-programmed system decided car should continue, driver overrode and stopped car."

Etc. Aggregating all these reports from many self-driving cars, you could build an excellent image-to-appropriate-reaction classifier.
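A minimal sketch of what one such self-labelled record might look like (all class and field names here are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class DrivingEvent:
    """One implicitly-labelled training example of the kind described above."""
    camera_frame: bytes   # raw image from the forward camera
    system_decision: str  # what the heuristic planner chose: "stop", "continue", ...
    driver_action: str    # what the human behind the wheel actually did

    @property
    def label(self) -> str:
        # Agreement or override by the driver is the free reinforcement signal.
        return "confirmed" if self.system_decision == self.driver_action else "overridden"

print(DrivingEvent(b"<stopsign image>", "stop", "stop").label)    # confirmed
print(DrivingEvent(b"<kitten image>", "continue", "stop").label)  # overridden
```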


Yes, but with voice data it's ok if the system gets it wrong occasionally. Worst-case scenario is the user gets annoyed and tries again (or gives up and does something else).

In a driving situation, the worst-case scenario is everybody dies.


Same worst-case scenario when humans drive daily, but wrong less often.


I would guess that the processing power is all that matters. It's not difficult or particularly dangerous to drive without being able to hear. I would guess that people driving remote controlled cars with 360-degree views but no other cues would perform very nearly as well as real drivers.


I think the key point is that it is very difficult for software to parse view-from-windshield images like this: http://blogs.bootsnall.com/chaskaconrow/files/2006/02/Chaska...

The human eye can instantly recognize the available driving paths, the motorcyclist ahead, and project where people will walk. Software would have to parse out where the open roads are, how far that motorcyclist is and whether he can clear the intersection before the car reaches it, and what that sign on the right-hand side is—using the same information, but it has to parse it first whereas we do that almost instantly. It's a totally different game.


Yes, that's what I'm saying. I'm saying the other sensors the parent post mentioned weren't actually important with regards to driving, just our ability to parse the visual data into a meaningful model of the world around us.


I think hearing is also useful, from time to time. It's not AS critical as sight, but if nothing else it allows drivers to share their emotional state in a very primitive way and to gauge how their engine is performing.


Not to mention a hard-wired muscle-memory response system for crisis situations.


While driving, humans assume that the road ahead is OK (at least without significant potholes, and in sunny hot weather no ice-patches). We would expect better of robots (i.e. if a human crashes because of an oil patch, (s)he's a bad driver; if a computer crashes, it's a million-dollar lawsuit).

Edit: a better solution would be to observe the behaviour of other drivers; if there is someone driving ahead of you, you can assume that the road between you and them is OK; if there's no one ahead of you, you need to drive slower and be more careful (that's how I drive at night). Once there's a critical mass of cars with cameras, cars could communicate road conditions automatically.


> Once there's a critical mass of cars with cameras, cars could communicate road conditions automatically.

I would be frightened to trust the data coming from a random car in front of me. Inferring road conditions from another car's behavior sounds reasonable; using data supplied by it, not so much.


You shouldn't and wouldn't rely unfailingly on what other cars merely report. If the car in front of you insists it's maintaining speed while your own readings indicate it's slamming on its brakes, you should assume it's slamming on its brakes.

However, if the car three cars in front of you just broadcast "I'm doing an emergency stop right now", that's really valuable data. The human in your car won't know anything is wrong for at least a second. The human driver behind you would know about it before the human driver in front of you.
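The "trust your own sensors first" rule might look something like this (a hypothetical sketch; the tolerance value and units are made up):

```python
def trusted_lead_speed_mps(broadcast_mps: float, radar_mps: float,
                           tolerance_mps: float = 2.0) -> float:
    """Accept a V2V broadcast only when it agrees with what our own radar
    measures; otherwise fall back to our own sensors."""
    if abs(broadcast_mps - radar_mps) > tolerance_mps:
        return radar_mps  # the other car is lying or malfunctioning; trust ourselves
    return broadcast_mps  # consistent: the broadcast may be fresher or more precise

print(trusted_lead_speed_mps(25.0, 10.0))  # 10.0 -> it claims cruising, we measure braking
print(trusted_lead_speed_mps(25.0, 24.5))  # 25.0 -> consistent, accept the broadcast
```

A broadcast from a car you can't see (three cars ahead) can't be cross-checked this way, which is exactly why it's both valuable and dangerous.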


> However, if the car three cars in front of you just broadcast "I'm doing an emergency stop right now", that's really valuable data.

Until some asshat implements a button that sends this message for "lulz", though that would probably constitute a crime under existing law.


That will be the most common failure mode for computer-driven cars: how easy they are to bring to a stop. (And, yes, that's probably criminal behavior.)

A computer-driven car, though, wouldn't (shouldn't) just immediately slam on the brakes because of that signal. It would tighten seat belts and start slowing down, but it also would want to avoid getting rammed by the car behind it. It can make very accurate estimates about its stopping distance and use all of it.


The internet works that way, and you seem to be fine with that. :\ Is it the "safety issue"? i.e. the internet can't crash you into a wall, it can only send you to rotten or steal your credit card...


Wow, no. I don't trust the internet to give me real facts about elephants, let alone anything life-threatening. http://en.wikipedia.org/wiki/Wikipedia:Wikiality_and_Other_T... If the equipment had some built-in tamper detection, and Google's sensors digitally signed data if they didn't detect tampering, then I might trust it enough to drive with.



So if a car driving right ahead of me is leaking oil when cornering...


"Seeing" and "Perceiving" are likely very different. Yes, we only have binocular visual input, but the excess processing in the brain takes perception to another level. However, machines have trouble with the perception part and so have to make up for it by seeing in excess.

That's just my theory on it.


> But why LIDAR though?

The simple answer is that we don't know how to do it with stereo vision alone yet. Getting range reliably is hard, and your brain uses lots of tricks to do it.

The second answer is that we need better-than-human performance if this is to take off. So using human-type sensing might not be good enough anyway.

Lots of researchers are pushing on vision-based driving, though.


Human eyes are actually equivalent to very high-end video cameras, and the image processing that you can do in your squishy grey 10-watt processor is still way better than anything we can do with computers. You need your navigation system to be able to directly sense in 3 dimensions for it to be competitive.


Not really: we have high-resolution (and in-focus) vision only in a narrow field of view in the middle; everything else is not that good. We compensate for this with the ability to quickly move our eyes and refocus.


FYI, human eyesight is near a 24MP equivalent. It's actually pretty high-res.


Well, that's already in consumer-level (OK, "prosumer") cameras. Also, we know that the "fps" of an eye is around 60 Hz, since that's the minimum refresh rate at which monitors look OK.


When you can fit an exaflop electronic computer into a car, then maybe two cameras would be sufficient. Right now, we have to make do with less, and better sensors can make up the difference.


There is no exaflop computer in the human though - we just don't have enough energy to power it inside us.


That's a rough estimate of how much computing power is in the human brain. It's extremely efficient energy-wise, but massively parallel and weirdly put together so not entirely comparable to an electronic computer. Still, the computing resources available to process the images from the human eye are enormous.


Maybe we should add LIDAR to humans too.


Humans fail, it's called "crashing".


"You can only profile the road out to a limited distance"

Though, with many self-driving cars, perhaps the sensors on each could share info, thereby mapping a larger (if not endless) area.


Oh man, the vulnerabilities of using likely untrustable networked sensors for safety-critical operations boggle the mind. While the sensors could theoretically be made trustable to some level, I would be very wary of trusting them at the level of a safety-critical application. Inadvertent vulnerabilities and attacks by malicious actors would be catastrophic.

My prediction: "crashdummy" will eclipse "heartbleed" and "shellshock"!


As someone with actual experience, do you think self-driving cars are in the near future (decade or two), or are the technorati deluding themselves?


I think they're not that far away. Google is talking about starting with small self-driving cars with a top speed of 25mph or so. (At that speed, you don't have to drive out of trouble; an emergency stop is sufficient.) I expect those will be common in retirement communities in 10 years or less.

Tesla is talking about automatically putting their car into a garage. That's a good application; it's slow, and you can have sensors all around the car to avoid hitting anything. A more general system that can put a car into a big parking garage or lot is quite possible. Cooperating parking garages might have some additional bar-code markers and maybe a data link for open space info.

The whole airport car-rental thing could be done automatically, using slow-speed automatic driving to bring the car up to a pickup point at the terminal, just as the renter gets there. That may be one way this gets deployed. (I proposed that around 2003, but after 9/11, the idea of autonomous vehicles in an airport seemed politically hopeless.)

Those are some ways this might be deployed. Everyone has obsessed on automatic freeway driving since the 1950s, but that may not be the killer app.


I just had a vision of a youtube video from the near future of some slow driving cars on automatic parking mode getting stuck in loops against each other. Somewhere in the background, a dog barks.


I remember reading an article years ago about automatic driving first coming to commercial shipping (i.e. 18 wheelers and other cargo vehicles) using a separated lane on highways, but that the political pushback (jobs lost) was making that a difficult sell.


Did you all have to do work with navigating through snow and heavy rain? I was under the impression that snow is a big challenge and heavy rain isn't much better. Does that still hold true today?

It is one thing to anticipate threats, but when the road is under snow how do computers make that intuitive leap people can?


When the road's full of cars spamming LIDAR, under a network of drones doing the same... interference and noise surely become a real problem?


The duty cycle on LIDARs is very low. If you're ranging to 200 meters, the receiver is only taking data for 1.2us. At 60 Hz scanning, the receiver is active for 72us/sec, or 0.0072% of the time. So in the presence of 100 other transmitters (worst case), you'll get a conflict 0.72% of the time. If the transmit time is randomized slightly (which I don't think Velodyne does, but a production device must), you won't get the same bogus reading twice in a row. Over three readings, if you throw out outliers, this problem should go away.

If the LIDAR data has too many outliers, it's necessary to slow down and only use data from short ranges. At some short range, the LIDAR will "burn through" any jamming from a more distant range, per the radar equation. I agree that on production vehicles, anti-jam software, as described above, will be necessary.
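The arithmetic checks out, give or take rounding (1.2 µs is a rounded 2R/c round-trip time; the exact value for 200 m is closer to 1.33 µs, which nudges the conflict chance from 0.72% to roughly 0.8%):

```python
C = 299_792_458.0  # speed of light, m/s

def receive_window_s(max_range_m: float) -> float:
    """Round-trip time of flight: how long the receiver listens per pulse."""
    return 2 * max_range_m / C

window = receive_window_s(200)   # ~1.33 us for a 200 m range
duty_cycle = 60 * window         # fraction of each second active, at 60 Hz scanning
n_other = 100                    # worst-case count of nearby transmitters
p_conflict = 1 - (1 - duty_cycle) ** n_other

print(f"duty cycle: {duty_cycle:.4%}")       # roughly 0.008% of the time
print(f"conflict chance: {p_conflict:.2%}")  # under 1% even with 100 transmitters
```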


My experience says probably not. LIDARs are very directional at any given instant, and you need to filter out outliers anyways thanks to things shimmering in the sun.


Agreed, and autopilot contributes to similar accidents even on aeroplanes. See below as an example. While autopilot was not the main cause, it is heavily relied on, and it certainly worsened the situation when it suddenly cut off in the middle of the night.

http://en.wikipedia.org/wiki/Air_France_Flight_447#Final_rep...

"the crew lacked practical training in manually handling the aircraft both at high altitude and in the event of anomalies of speed indication; the two co-pilots' task sharing was weakened both by incomprehension of the situation at the time of autopilot disconnection, and by poor management of the "startle effect", leaving them in an emotionally charged situation;"


Personally I think the only things that mattered in that accident were poor training and poor UI.

First, one of the pilots held the stick back all the way down to the water even though the airplane was clearly stalled. This is completely inexcusable. The automotive equivalent would be holding the accelerator pedal to the floor while aiming at a brick wall. Even worse, the guy who did this was a qualified glider pilot, and glider pilots should have extreme familiarity with stalls and stall recovery since so much of a glider's flight time is spent close to stall.

Second, the control system of the plane handled conflicting inputs from the two pilots by averaging them together. There was no indication to the pilots that they were fighting each other. Positive exchange of control of the aircraft is another really basic thing about flying, even more basic than putting the nose down in a stall. This setup made it far too easy for the pilots to be unaware of who was really flying the plane. If the control sticks moved together, it would have been obvious to the other pilot what was going on.

One could put some blame on the autopilot in causing the one pilot to forget the basics of stall recovery, but it seems to me that the real fault was in not training for it sufficiently, since that's not something you do in normal flight in an airliner anyway.


There are obviously high-profile tragedies like that one, but I don't see that as evidence that autopilot technology doesn't, on net, make air travel much safer and more reliable.


Obviously people believe that autopilot represents a net benefit to air travel, but there are far fewer planes in the sky than cars on the road, a lot more open space largely free of foreign objects, and a ton more money per plane allowing for more sophisticated sensors and systems. There is a dedicated traffic coordination system for aircraft, and each pilot takes instruction as given by ATC. Planes are also constantly monitored by highly trained pilots and mechanics, and operations are closely supervised by multiple regulatory bodies.

A great deal of the required operations for autodriving are offloaded or irrelevant in autopiloting. I know we were talking about an isolated case of the unexpected failure of an automated piloting system, but if we consider all of these additional complexities of typical road driving, multiplied across millions more cars with barely trained operators, it's reasonable to suppose that those types of problems would occur much more frequently with autodrivers than autopilots.


That's absolutely not true. What if you are drowsy or distracted and the autopilot brakes for you and avoids an accident?

Keep in mind every car out there is currently built with a feature where you can accelerate to a lethal speed, then press a small button, and the car continues to travel at that lethal speed until the car runs out of gas, hits a brick wall, or the driver consciously disengages it. We call it cruise control.

If all we are doing is adding automatic braking and lane following to cruise control, I don't see how it can be anything other than better.


> I don't see how it can be anything other than better.

Because in the real world, people won't exercise exactly the same level of situational awareness, such that the autopilot only adds a safety mechanism. In the real world, it will replace a certain amount of driver attention. And the question is, is it always superior to human attention?

It's probably net superior, but I bet it will cause a couple of high profile accidents that a careful human could have avoided, and then people (or the government) will be scared of it.


After reading about the abysmal coding standards at Toyota in the court case regarding the deaths in the unintended-acceleration cases, I am leery. Having worked with embedded systems before, I know how hard they are to get right. I just wish that all these companies would realize that it's better (for the greater good) to open-source this instead of trying to get it right all on their own. Look at how much better the web has become since there have been a few major open-source web browser projects.


> And the question is, is it always superior to human attention?

The more valuable question is this:

Does it perform better than humans in the most frequently occurring situations that lead to incidents? Are those situations a large enough fraction of all incidents that the number of overall incidents drops significantly? And will that drop be greater than the increase in frequency of less predictable situations?

If the above turns out to be the case, then there will be fewer accidents despite less aware drivers.


There has been a ton more downvoting on HN in the last month and it's getting kind of absurd. It's like a group of redditors hit the downvote threshold and just started going at it.

edit: case in point


I've noticed that saying something negative about Apple or, now, Tesla, produces a quick downvote. On Slashdot, there's a striking phenomenon that saying something negative about Apple produces downvotes after about 15-30 minutes. Unclear whether this is a fanboy problem or a commercial "reputation management" firm in action.


Is it just Apple, or do negative comments about other companies also get commenters downvoted these days?


It's mostly the employees of the respective companies themselves. I was once working at a company (now a popular help desk company, famous even within HN circles) where the co-founder called up a few of his employees to downvote a comment that raised a serious but negative point about his company. And all the employees were smart enough to use proxies to downvote the comment into oblivion. So, always think twice about whether it's worth losing your karma in a fixed game.


> So, always think twice if it's worth losing your karma in a fixed game.

Or I could refuse to accept the premise internet points are important and realize that downvotes aren't a reliable indication of post quality.

I'm certainly not going to be bullied by companies into being quiet because they might take away some of my internet points.


I value my precious internet points and feel emotionally hurt with every down vote.


The year is 2014: anxious business owners obsess over Internet Points! How do I get more, they ask?

Please RT.


That's absolutely ridiculous. I'm not saying that employees of a company won't downvote negative comments, but they're a tiny, tiny, tiny minority. As in, I'd put any amount of money on <1% of downvotes related to Apple or Tesla coming from employees of those companies.


Yeah, but if you can trigger a pile-on feedback loop, it could be enough.


> It's mostly by the employees of the respective companies themselves

Your anecdote is insufficient to support this claim. One example does not make this a common practice.


Isn't gaining karma also a fixed game?


Or maybe it's because Apple is routinely (and unfairly) maligned all across the tech sphere, especially in the comments, and Apple users on less polarized sites vote to compensate.


Huh? Apple and Tesla are both darlings of the tech sphere.


have you been to /r/technology ?


Does /r/politics accurately reflect the political makeup of the US? No. Why would you assume /r/technology is representative?


Or The Verge comments? Or Slashdot?


I can't speak for the verge, but I associate slashdot with rabid apple fanboyism.


If Apple was hated, you would see the same thing happen to them that happens to Microsoft. How often do you see big Microsoft news on the front page of HN or a Microsoft liveblog on Ars Technica? Almost never, whereas with Apple every word they say is splattered across the news for weeks.


The interesting parts of Microsoft get a lot of press. Consider how much coverage a hypothetical Halo 5 would get before release. The real issue is MS produces little interesting new tech. Granted, I actually like developing for C# far more than most systems, but embrace and extend just does not make for much interesting news.


Windows 8.1 release and Windows 10 announcements both got flagged off the front page multiple times before finally an article was successful enough to stick around for a few hours. Same thing with the Surface Pro 3 and Windows Phone 8.1. Meanwhile iOS 8 had an article for each new feature for days.

The admins explained it that the Microsoft news triggers the flamewar prevention they built, and it happens almost every time. Microsoft articles need admins to explicitly allow them on the front page, or they're gone in minutes.


As a driver of a car with these systems (Audi, having a 'Lane assist' and 'ACC'): The implementation is absolutely crushingly useless, every day.

Lane assist kinda, sorta works (I can choose if the car will actively try to stay in the lane and/or if it's going to alarm me by vibrating the steering wheel when I cross over into another lane). It's limited to the Autobahn though, and more often than not it just doesn't recognize the lanes. If it does, it has the kind of protection you describe. Unfortunately that makes it even less useful: on a really straight road I cannot turn it on, because it's going to beep at me after 1-2 seconds. You need to drive (I exaggerate, but the idea's true) like in a Hollywood movie from the 60s (or the Simpsons intro), turning the wheel left and right for no reason.

(The ACC in this car is crap as well, but .. let's leave it with lane assist for now)

I am extremely skeptical that this system can work, in general, any time soon. Even in a Tesla.


>We're approaching the "deadly valley" - automatic driving that's almost good enough that the driver can stop paying attention.

Infiniti has a commercial out now that I see frequently while streaming TV that shows someone who should not be on the road but is enabled by the driver assist features.

Maybe one mandatory feature of driver assist should be that if the car intervenes more than 2 times in 10 minutes, its next step should be to find a safe place to pull over and stop.
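A minimal sketch of what that rule could look like, assuming a simple sliding-window count of interventions. The threshold and window come from the suggestion above; the class and method names are purely illustrative, not any manufacturer's actual API:

```python
from collections import deque

class InterventionMonitor:
    """Track driver-assist interventions in a sliding time window.

    If more than `max_interventions` occur within `window_s` seconds,
    recommend pulling over. Illustrative only, not a real vendor API.
    """
    def __init__(self, max_interventions=2, window_s=600):
        self.max_interventions = max_interventions
        self.window_s = window_s
        self.events = deque()

    def record(self, timestamp_s):
        """Log one intervention; return True if the car should pull over."""
        self.events.append(timestamp_s)
        # Drop interventions that have aged out of the window.
        while self.events and timestamp_s - self.events[0] > self.window_s:
            self.events.popleft()
        return len(self.events) > self.max_interventions

m = InterventionMonitor()
print(m.record(0))    # False: 1 intervention in the window
print(m.record(120))  # False: 2 interventions, still at the threshold
print(m.record(300))  # True: 3 interventions within 10 minutes
```

The same interventions spread over hours would never trip it, since old events age out of the deque.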


This. Why shouldn't driver assist be able to tell when the actual driver is drunk/tired/dangerously incompetent and pull over? It could even call their emergency contact or a taxi and disable the car.


Because no one would buy it. Who wants an inanimate object deciding that they are too tired to drive, with no discussion or recourse?

And can you imagine the liability implications? "My spouse had an asthma attack/allergic reaction/severe cut in the middle of the night and my car stopped me from driving them to the hospital. They died before the ambulance reached the house."


...and the reverse liability implications: "My spouse had an asthma attack and their car didn't pull over safely, so it is the [car manufacturer]'s fault that they crashed into a tree and died."


True enough. If you promise buyers that your car will safely stop if the driver is incapable of driving, you better get it right.


Not being a drunk or erratic driver, I was thinking more of driving when I'm tired. It is possible to be a little too tired to drive safely without realising it, but I imagine the extra small adjustments you make when this happens - because you drift and correct - would be recognisable by the car.

It would be a huge pain if you are on a long drive and your car decides you need a nap! But it would save lives, possibly mine. Personal transport is a convenience. Personal safety is paramount.


Man oh man I hate that commercial. Every time I see it my blood boils.


Is this the commercial you're talking about?

http://www.youtube.com/watch?v=Id8CQ-vsojQ


Basically that one, but I feel like there was more of an intro to it. I will have to pay attention next time it comes on to be sure.


As a Brit that lives in a village with minimal public transport, the ONLY reason I want (fully automatic) hands-off driving is for when I'm drunk.


> The minimum safe level is probably a system that can get the vehicle stopped autonomously when it's headed into a situation it can't handle. Beeping the driver to take over is not going to work in practice.

I think the MVP for an automatic driving system—presuming we don't just get the AI version out soon enough—would be your suggestion (stopping the car when it can't handle a situation), plus passing control of the vehicle to the moral equivalent of OnStar. There'd be call-centres full of people remotely driving drunk/sleeping people's cars for them using the vehicle's instrumentation+the driver's pre-programmed route. Like a taxi service where your own car is the taxi.


I'm recalling some discussions with scientists in remote robotic surgery a few years back where I was told that for some truly high precision operations, the latencies become unmanageable past a point. I'd be curious to know if the state of the art has shifted at all, or if we really are starting to hit the limits of systems that require a certain bar of reaction time and precision.

Following from that, I wonder what bar Driving has, and as such, what distribution of "Controllers" we'd need to be safely within that, and if that would allow it to be viable.


Client side prediction would help a lot. Online racing games seem to easily handle 300ms latencies.

I fully realize that the difference between virtual racing games and real live humans in hunks of metal is massive. I'm just saying that work on dealing with latency in situations which are more predictable and have larger margins for error than high precision operations have been worked on ever since QuakeWorld.
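The core of client-side prediction is just dead reckoning: while the operator's next command is in flight, extrapolate from the last known state. A back-of-the-envelope sketch of why latency matters here, assuming constant speed over the latency window (all names are illustrative):

```python
def predict_position(last_pos_m, last_speed_mps, latency_s):
    """Dead-reckoning: extrapolate position over the latency window,
    assuming speed stays constant while fresh input is in flight."""
    return last_pos_m + last_speed_mps * latency_s

# At 60 mph, a 300 ms round trip means the car has travelled roughly
# 8 metres past the state the remote operator is reacting to.
speed_mps = 60 * 1609.34 / 3600   # 60 mph in metres per second (~26.8 m/s)
gap_m = predict_position(0.0, speed_mps, 0.300)
print(round(gap_m, 1))  # ~8.0 metres of "stale" travel
```

Games hide that gap by predicting and then smoothly correcting when authoritative state arrives; a remote-driving system would presumably need the car-side controller to do the same, with much tighter bounds on how wrong the prediction is allowed to be.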


Indeed, I would say that the work done getting e.g. OnLive to work would be extremely relevant to people driving remotely: if you can be competitive in an arbitrary FPS game streamed to you across the continent—in a direct adversarial contest of reaction time against people playing locally on their own hardware—then I'm pretty certain you can drive a car using streamed data. Especially since, unlike with games, drivers won't demand "excellent 1080p graphics and HD sound"; just something usable to get around. The car-side computer can do a lot of pre-processing of its own outputs to get the streamed data rate down to something useful.

The real problem, I think, would be making sure the (presumably cellular) connection to the car doesn't drop or suffer from latency spikes. If the car companies teamed up with the cell companies, there'd probably be big pushes to put 4G or WiMax microcells on every power-pole running down streets and highways, and even more in cities, to ensure cars remain online.


>>>Mercedes, BMW, Cadillac, Volkswagen, Ford, etc. already have that.

So? None of these manufacturers have a carbon-friendly drivetrain. The fact that Tesla added a software feature "late in the game" is no 'bonus' to the already-contenders. Actually, let's see Mercedes' obstacle-avoidance compete with the SpaceX/Tesla crowd. I'm willing to suffer the down-votes until such a point is reached that we all, conscientiously, acknowledge the docking maneuvers allowed by a SpaceX/Tesla co-operation. In the meantime I'd like to point out to you: software features as an "also-ran" are not really a dilemma.


Among the auto manufacturers' offerings, some 'lane assist' systems can only handle the most simple of turns (taking you out of the lane, _very scary_). These systems are too simple IMHO, with single cameras and/or no mix of additional systems.

Tesla seems to want a system that is more complex. Things like "have the car come meet you from where it was parked", lane _crossing_ (let alone lane keep assist), and eventually on-off ramp driving. All with live updates rather than going into the dealer.

What makes this offering (potentially) different is the ambitious intended use of the system, something which would make even a Daimler-Benz lawyer cringe.


I find it so odd that automated driving and auto-safety features (adaptive cruise, lane assist, etc.) are being so aggressively developed.

These features have never once appealed to me and I can't imagine using them even if I had them. I find it very hard to square the "American Love of Driving" and the cultural connotations that driving has in the US with systems that relegate the driver to a passenger ...

Maybe for stop and go rush hour driving ... but beyond that, I don't see the appeal.


Americans love cars, but not necessarily driving them.

I think this pretty well sums up why most people hate it: http://www.qwantz.com/index.php?comic=2587


Trucking industry.

The pay isn't always great, humans require sleep, and freight needs to be delivered. I am still amazed trains are piloted, but they are, and that is far simpler to automate than trucks.

Once trucking becomes automated you will have trucks delivering at all times to warehouses, scheduling around traffic patterns. Whereas humans are not necessarily at their best driving at night, for vision and physiological reasons, computers could drive at the hours when traffic is lightest.


Lower insurance rates. And I don't care how much you love driving, no one loves to accidentally rear-end someone that just slammed on their brakes while you were distracted checking the adjacent lane prior to moving over to get to an exit. And automation can help both situations (when putting on the lane change signal, I want some type of alarm if a car is entering my blind spot, for example).


Not that I disagree with you about the American love of driving, but that notion is part truth and partially a creation of automotive marketing. Frankly, many cars aren't that enjoyable to drive, and driving an enjoyable car in traffic is usually not very enjoyable either.


Are the current generation systems in a state where they can recognise and deal with some high percentage of that 1% by bringing the car to a safe stop? I realise of course that stopping can in itself be dangerous in certain situations, but it's probably going to be safer on average than just barrelling on regardless of what's coming.

I can imagine that not being default behaviour because it doesn't match driving norms, but perhaps it would be better?


I don't think the term is misleading at all. Autopilot on aircraft does not remove the need for a pilot, it just maintains course and speed.


"Course and speed" are necessary, even vital, but not sufficient.

Two airline pilots fell asleep while cruising over Hawaii last February, flying past their destination toward open ocean for 18 minutes before waking up and returning for a safe landing, federal accident investigators revealed Tuesday.

Ref: http://abcnews.go.com/Travel/story?id=5042619

A packed passenger jet heading to the UK was left flying on autopilot when both pilots fell asleep at the controls, it emerged last night.

Ref: http://www.dailymail.co.uk/news/article-2432847/Both-pilots-...


How come in those situations the crew doesn't notice something's not right and try to communicate with the pilots?


> We're approaching the "deadly valley" - automatic driving that's almost good enough that the driver can stop paying attention.

This sounds to me like the "technology right now is crossing a distinct line into doom" fallacy. It seems like the same could have been said when automobiles started being operated by average folks rather than professional chauffeurs, when the synchromesh was invented, automatic transmissions, antilock brakes, cruise control, adaptive cruise control, and so on.


We've seen this happen with airplanes, where autopilots result in the human pilots no longer being fully aware of their environment and their context in that environment. So when the autopilot alarms, the human pilot doesn't have the context of the problem and often makes a naive mistake as a result (a naive mistake that has fatal consequences).

There was a good article discussing this recently on here, a quick search should find it for more details.

So, I agree with the parent that semi-auto pilot like features are probably quite risky when intervention is required. At least current generations, but I think we can also make better designs that address these concerns.


> We've seen this happen with airplanes.

We've seen airplane crashes because of pilots' misunderstanding of or ignoring autopilot features. But have we seen air travel become more dangerous on net because of autopilot features? I highly doubt it.



