California halts Pony.ai's driverless testing permit after accident (reuters.com)
138 points by Animats on Dec 14, 2021 | 151 comments


Here's the crash report:

"On October 28, 2021, after turning right onto Fremont Blvd from Cushing Pkwy, the Pony.ai Autonomous Vehicle ("Pony.ai AV") performed a left lane change maneuver in autonomous mode. While performing the lane change, the Pony.ai AV came into contact with a center divider on Fremont Blvd. and the traffic sign that was posted on the divider. The Pony.ai AV suffered moderate damage to the front of the vehicle and the undercarriage. There were no injuries and no other vehicles involved. Fremont Police Department were called to report the incident and the damaged street sign. Pony.ai has subsequently worked with local authorities to resolve all issues related to the damaged sign."

It's not a serious incident, but it is a fully autonomous vehicle under good conditions going somewhere it was absolutely not supposed to go. So the DMV revoked their permit for fully autonomous operation without a safety driver. They have to go back to testing with a safety driver.

[1] https://www.dmv.ca.gov/portal/file/pony-ai_102821-pdf/


Streetview positioned at a right turn from Cushing onto southbound Fremont Blvd:

https://www.google.com/maps/@37.4905025,-121.9488087,3a,24.2...

You can see the curb rashed up and the rather improvised fix to the sign, from a previous accident I suppose, since the imagery is dated March 2021.

https://imgur.com/a/U8cokYd


It crashed right outside a Tesla factory? There is a joke here somewhere.


Could Tesla have put up adversarial triggers near their own facilities for testing, and they interfered with whatever OpenCV or vision package Pony.ai is using?

Using invisible-to-humans paint on signs for adversarial recognition attacks on autonomous driving systems would also be an anti-Tesla tactic not entirely unreasonable to consider.

"Tesla autopilot crashes right outside Tesla factory!" would be a desired headline for a certain crowd.

It'll be interesting to see what caused this; at any rate, there's probably less than a 1% chance the above is what happened.


This one gives a better idea of why this might have happened: in the direction of travel, the white back of the sign is easy to miss with a white-sided box truck behind it:

https://www.google.com/maps/@37.4902551,-121.9483919,3a,75y,...

The pole at the end of the divider with the sign facing away, the stretched-arrow left-turn symbol painted on the street, and the enormous expanse of intersection pavement are all a little disorienting.


I doubt that was where the impact was if the car was turning right from Cushing onto Fremont, since it's 2 lanes over from the turning lane.


The report said it performed a left lane change after the initial turn, so it could still be.


What's amazing is that it's on Google Streetview already! Wow. Google sent a drone out to get a photo right away. Amazing.


Seems like maybe you misread something. I didn't suggest it was recent...I put the date in the post.


It’s not a serious incident for a human; there are lots of innocent explanations for such a thing. It is a serious incident for a robot, for which running into a street sign in the middle of the road is a “never” kind of problem and indicative of a significant issue.


> It’s not a serious incident for a human

And perhaps that's part of the problem. "Oh, this person crashed his heavy machine into something...it happens" seems to significantly undervalue the responsibility and risks of driving. It should really be a "never" problem for people, too.


FSD isn't even close to human capability yet, so it's a moot point. When FSD is as good as a human driver, we can then think about quantifying how seriously to take its occasional malfunctions / mistakes.


“Do not drive into stationary objects” should be one of the more basic requirements for a self driving car, any company that messes that up makes me nervous about how they might handle more complicated situations.


Asimov's Laws for Self-Driving Cars:

1. Don't hit anyone

2. Don't hit anything

3. If forced to violate rule(s) above, preserve the life of the person making the car payments


That was part of the spec of the traction/braking system of a vehicle I worked on. Passengers' lives were prioritized, as the premise behind the decision was: "Outside the vehicle is an unknown; we don't know if it's a human, an animal, or an object. We know that inside we have human life (at least the driver, at most hundreds of passengers), so we give priority to what we know."


That sounds bad. Yes, you know that there’s a person in the car, but you also know that if the object in front of you is a person then they are multiple orders of magnitude more likely to be killed than the occupants. You have to weigh the uncertainty and the disparity of outcomes.

Sounds like a continuation of the automotive trend that pretends like the only person whose life matters is the occupant.


Some interesting moral questions can be asked about this setup.

The most basic one: if indeed the outside is unknown to your product, are you a moral person for developing it?


> On Oct. 28, a Pony.ai vehicle operating in autonomous mode hit a road center divider and a traffic sign in Fremont after turning right

This is really bad; if their software can't even avoid hitting a stationary object, then it isn't ready to test on public roads.


> This is really bad; if their software can't even avoid hitting a stationary object, then it isn't ready to test on public roads.

Almost as bad as a Tesla crashing into a stopped fire truck. Why is Tesla “autopilot” still allowed on Californian roads?

https://www.wired.com/story/tesla-autopilot-why-crash-radar/


> Why is Tesla “autopilot” still allowed on Californian roads?

Because Pony.ai is testing level 4 autonomy while Autopilot is level 2. The human driver is ultimately responsible for any accidents in a level 2 system. There is no driver to intercede or to blame in the case of a level 4 system. This crash is the sole responsibility of Pony.ai.


A Tesla driver was killed a few years ago on 101 south in Mountain View when his car hit the central part of the fork.

https://www.engadget.com/2018-03-24-tesla-model-x-driver-die...


That particular spot had been crashed into before and the crash buffers hadn't been properly replaced. This led to a normal crash turning into a fatality.

The driver of the Tesla had complained in the past of the Tesla swerving into the wall when passing that spot, yet still had his guard down on the day he died.

That the Tesla swerved is down to the poor striping on the road. The stripe along the right side of the left fork was brand new and very sharp, while the stripe along the left side of the right fork was extremely dim to nonexistent due to excessive wear sustained during construction of the left fork and the bridge it leads to (the connector from the fast lane of 101 south to the fast lane of 85 south).

The crash barrier has since been properly replaced and the striping redone with diagonal lines hashing off the gore so it's abundantly clear even to a camera not to drive there.

That said, the Tesla self-driving system made a very dumb error no human driver would make.


If a self-driving mode depends on road markings being perfect to not kill me then I'd rather not have self-driving at all.


> That the Tesla swerved is down to the poor striping on the road.

No, it's down to defective self driving.


> That the Tesla swerved is down to the poor striping on the road.

Certainly not. It's not appropriate for a self driving car to make any assumptions about striping or any road markings being correct. It crashed because it wasn't capable of self-driving after all.


Well, it wasn't driverless. The driver was arrested for DUI. It's not even known from that article whether Autopilot was turned on.


The article goes on to say that two other drivers who said they had autopilot on also crashed into fire trucks… How many fire trucks do Teslas need to crash into for it to be considered a problem?


“Tesla AI detects imminent collision, optimises response time from fire department.”


One could argue that normal cruise control on normal cars would also crash into fire trucks. Ultimately drivers are responsible in both cases; it's their fault, 100%, end of story.

PS: Tesla rolled out a software update to detect emergency vehicles and slow down.


The cruise control on my Subaru probably wouldn't hit a fire truck since it has pre-collision braking... but in any case, it's not sold as "Autopilot" or "Full Self Driving", so the limitations are pretty clear. There's a reason several countries stopped Tesla from marketing their nonsense like they do in the US - it turns out that when you say things are self driving, consumers believe you and don't read the fine print.

https://www.reuters.com/article/us-tesla-autopilot-germany/g...


Most TACC systems won't brake for objects moving slower than vehicle_speed minus some threshold, due to how radar works. You don't get relative speeds and distances for a set of objects; you get a spectrum showing how much of what the radar sees is moving at each relative speed.

That’s also why most TACC doesn’t work below 30km/h or so. It can’t tell the difference between the car in front and a stationary object next to the road.

Edit: This is actually an interesting rabbit hole to go down, because on the face of it, it seems like a fairly straightforward system to implement. Then the more you think about how a 'simple' system should handle various scenarios, the more you realise that to actually do it properly, 'just keep a good distance from the car in front' requires most of the perception required for general free-form driving.
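
To make the failure mode concrete, here's a toy Python sketch of that filtering (the function name, numbers, and margin are all invented for illustration, not anyone's actual implementation):

    # A radar doesn't hand you "car at 40 m closing at 5 m/s"; it hands
    # you energy per relative-speed bin, and everything ground-stationary
    # (signs, bridges, parked trucks) shows up at roughly -ego_speed. A
    # common mitigation is simply to discard returns near that speed.

    def select_follow_target(returns, ego_speed, stationary_margin=2.0):
        """returns: list of (relative_speed_mps, range_m) detections.
        Relative speed is negative when the object closes on us."""
        candidates = []
        for rel_speed, dist in returns:
            if abs(rel_speed + ego_speed) < stationary_margin:
                continue  # indistinguishable from roadside clutter: drop
            candidates.append((dist, rel_speed))
        return min(candidates) if candidates else None  # nearest mover

    # A fire truck stopped in-lane at 25 m is filtered out just like an
    # overhead sign would be, while the slow-moving car at 60 m is kept:
    print(select_follow_target([(-30.0, 25.0), (-5.0, 60.0)], ego_speed=30.0))
    # -> (60.0, -5.0)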


This isn't true (on Subarus at least, likely due to their binocular EyeSight cameras...). The primary purpose of their suite of driver aids is to prevent accidents, so it's actually really good at that. There are plenty of videos to attest to that, but here's an example of a 60 kph stop when the system detects a cone:

https://youtu.be/0A4i4-xALVg?t=252

Edit;

I guess you did say braking with TACC is complicated -- but I think that's a bit of an edge case that only applies to Tesla. The Subaru version of TACC is ~fine, but it never gives you the impression it will handle 100% of the driving for you, so you'd never leave it to navigate around emergency vehicles by itself. Maybe the solution to rarely-but-dangerously-failing self-driving is to make it only ~75% good so drivers are always aware?


That's pretty cool! Much better than the radar based TACC on the Hilux I rented a couple of weeks ago. I came away from that one with a strong impression that it was inconsistent in a scary way and shouldn't have shipped like that.

Do you have one (or experience driving one)? If so:

- How does it go with phantom braking and various objects on the side of the road?

- Does it distinguish between various on-road things? (eg. does it brake for a dog? a plastic bag? a puff of smoke? an anvil falling off the back of the truck in front?)

- How good's its understanding of lanes (eg. a very slow car that's in another lane but currently directly in front of you due to the road curving)?

- Does it brake for objects outside the lane entering it (cross traffic, lane cut-ins, pedestrians, vegetation moving around in high winds)?


Airplanes on autopilot don't brake, either, when objects appear in their path. They especially don't in the case of fire trucks, or your Subaru.

Tesla's Full Self Driving has been released only to a tiny number of beta testers.


If an airplane on autopilot encounters a Subaru, there are _much bigger_ problems than ‘not braking’.


It is important to note that suddenly, and against all probability, a sperm whale had been called into existence, several miles above the surface of an alien planet. But since this is not a naturally tenable position for a whale, this innocent creature had very little time to come to terms with its identity.


1) Most consumers have absolutely no idea how airplane autopilot works. Pulling a “well actually” on this does not remove the issue of misleading marketing.

2) Yes, and the videos I’ve seen from those beta testers are terrifying.


I actually think most people believe a pilot is needed to fly a plane with autopilot. Don't you? Who would want both of their pilots to be asleep?


No, I don’t think people believe that at all. A certain meme entered pop culture years ago that either the pilot is there for takeoff and landing, or the pilot is there in order to make the passengers feel safer. I certainly recall a lot of “planes these days basically fly themselves” articles. Neither of these are strictly correct about how autopilot works, but we are arguing about cultural understanding here.

Even taking the lesser of these two scenarios, this would imply that Tesla autopilot should be capable of doing most of the driving, with the driver just there for whatever bits you might argue are equivalent to a takeoff and landing. Given the stories of Teslas accelerating into stationary objects or totally losing track of the road lane, I don’t think even this limited understanding of what “autopilot” means is correct.


Direct quotes from Elon and the Tesla website:

> "The person in the driver's seat is only there for legal reasons. He is not doing anything. The car is driving itself."

> “Full Self-Driving Capability All new Tesla cars have the hardware needed in the future for full self-driving in almost all circumstances. The system is designed to be able to conduct short and long distance trips with no action required by the person in the driver’s seat. All you will need to do is get in and tell your car where to go. If you don’t say anything, the car will look at your calendar and take you there as the assumed destination or just home if nothing is on the calendar. Your Tesla will figure out the optimal route, navigate urban streets (even without lane markings), manage complex intersections with traffic lights, stop signs and roundabouts, and handle densely packed freeways with cars moving at high speed. When you arrive at your destination, simply step out at the entrance and your car will enter park seek mode, automatically search for a spot and park itself. A tap on your phone summons it back to you. The future use of these features without supervision is dependent on achieving reliability far in excess of human drivers as demonstrated by billions of miles of experience, as well as regulatory approval, which may take longer in some jurisdictions. As these self-driving capabilities are introduced, your car will be continuously upgraded through over-the-air software updates.”


Yeah, Musk blatantly lying about these things really doesn’t help their case one bit. It’s hard to hide behind “autopilot is just marketing” when their popular CEO is lying about its capabilities left and right.


Cruise control is literally just for (at least in a traditional sense) keeping your speed constant.

You're still responsible for watching traffic and using the brake or gas as needed, especially if there is some issue up ahead that requires a safety maneuver of any kind.

E.g., an emergency vehicle, like a California Highway Patrol vehicle, begins a traffic break 15 car lengths ahead of you. As long as you're observant, you'll notice that in time and either cancel the cruise control or make the right adjustment so you slow down to the traffic break's speed.


By the same line of reasoning, you could say that my car doesn't have an emergency oxygen mask, therefore airliners don't need them either. If the airplane goes too high for passengers to breathe comfortably, it's 100% the pilot's fault, end of story.


The pre-collision avoidance should have kicked in either way. My Subaru has saved me once or twice when a car in front of me braked harder than I realized, and they’re not even claiming anything close to full self driving.


My 2015 Model S does that too. And also often detects stationary vehicles that it thinks might be in the way.


The Subaru wouldn't stop for a stationary vehicle though. Most of those systems either don't work at all for stationary vehicles, or work far less reliably. They're mostly designed for the case you described - car ahead slows suddenly and the driver isn't paying attention.


Care to clarify what you mean here? I had a Subaru Outback with EyeSight a few years ago and I tested it with stationary objects in its path multiple times (cardboard boxes and such). The Subaru braked and stopped short of the objects 100% of the time.


When the vehicle you're about to collide with is moving slowly, there are good radar returns, so you can use both vision and radar to make the decision.

When the vehicle is stationary, the radar data has to be heavily filtered because radar has poor angular resolution and there are a lot of other stationary objects, many that you don't want to stop for at similar angles, like overhead road signs. That means you're relying purely on vision, which alone has a lot of false positives and negatives, so detection thresholds need to be set much lower.

Doesn't mean it won't work. Just means it's less likely to work, especially in the dark and the rain, where radar would normally be a big part of the signal.
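
A toy illustration of that trade-off in Python (the confidences, weights, and threshold are made-up numbers, not any vendor's real tuning):

    BRAKE_CONFIDENCE = 0.8  # hypothetical bar for triggering auto-brake

    def should_brake(vision_conf, radar_conf, target_is_moving):
        if target_is_moving:
            # Clean Doppler return: fuse vision and radar evidence.
            fused = 1 - (1 - vision_conf) * (1 - radar_conf)
        else:
            # Stationary target: the radar return is indistinguishable
            # from signs and bridges, so discount it and lean on vision.
            fused = 1 - (1 - vision_conf) * (1 - 0.2 * radar_conf)
        return fused >= BRAKE_CONFIDENCE

    print(should_brake(0.6, 0.7, True))   # True: fused ~0.88
    print(should_brake(0.6, 0.7, False))  # False: fused ~0.66

The same sensor readings clear the bar for a moving target but not for a stationary one, which is the behaviour described above.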


When I was driving a 2014 Volvo, it also automatically stopped before hitting a stationary cardboard box (up to 30 kph).


Not all these systems are radar. Subaru uses cameras, in fact.


> Most of those systems either don't work at all for stationary vehicle

Citation needed. Subaru claims their EyeSight system will detect and attempt to brake for a stationary vehicle. Whether or not it manages to halt the vehicle is obviously speed dependent, but obviously slowing down before the crash is vastly superior to maintaining speed.


> Why is Tesla “autopilot” still allowed on Californian roads?

Because Tesla suckered some idiots into taking the blame if, for example, the car suddenly changes direction and throws you off a bridge or in front of a truck; Tesla is saying that their customers are supposed to be ready for these cases.

For now only some Tesla alpha-tester suckers have been killed and no innocent bystanders, but when an innocent person is killed, I am waiting to see whether the driver and his lawyer will accept the blame or will fight the ToS claims.


Well, they do have a "safety driver"; the DMV is only removing Pony.ai's permit to drive without one.


I interviewed at a self driving car startup in 2016 that was testing their early prototypes on public roads! They were also using their investor money to rent out some ten million dollar home in the Silicon Valley hills as their office. Big red flags all around. I declined their request for a second round interview.


> They were also using their investor money to rent out some ten million dollar home in the Silicon Valley hills as their office

Is this supposed to be a bad thing? Why?

The focus on the price of the house seems pretty strange in an area with a particularly high price-to-rent ratio. But I guess $20k/mo just doesn’t sound as spicy.


I think he meant that they rent out their own home to their own company to extract investor money from the company.


That would be a very strange detail to leave out. But sure, it’s the only way their comment makes any sense.

Otherwise, spending investor money on office space seems like a perfectly normal thing to do. The $ amount described doesn’t seem too unusual for the area, and a house might offer significant advantages over traditional office space for a startup that is working on cars.


There are plenty of other reasons if you think through it even a little bit.

* Are the founders or employees living in their office?

* Are they treating it like a work environment or a party house? Are the work, kitchen, and bathroom environments professionally maintained?

* Is there adequate parking or public transportation?

* Is it legal? Does it conform to HOA and city requirements?

* Is it centrally located for commuting workers and testing sites or the personal preferences of the founders?

* Is it scalable for a growing company?

Renting a mansion as an office throws a hundred red flags to anyone wanting a serious outcome from a startup.


Most of these seem like separate issues. Founders living in the office is weird regardless of it being a “ten million dollar house”.

> Is it centrally located for commuting workers and testing sites or the personal preferences of the founders?

The house in question was probably the comma.ai office in West Portal, SF (right next to the West Portal Station). Draw your own conclusions regarding commutes, it’s a pretty central location.

> Is it scalable for a growing company?

Offices are rarely very scalable, you just move somewhere else once it no longer works.


It was not comma ai. The house was in the hills of Silicon Valley and not in a central location.


Seems like setting up in houses is a common pattern for Bay Area self driving car startups, then. I’d guess that the availability of decent office space with workshop bays is essentially nonexistent.


Not in the South Bay where this company was. Plenty of machine shops, mechanics, etc. with workshop bays imo. What was true at the time was that investors were throwing money hand over fist at self driving car startups, even at young founders fresh out of college. I would not discount the psychological appeal, to a 20-something founder with millions of dollars to burn, of setting up shop in a mansion.

The point of my original comment is not to say that doing so is objectively bad, but I had just left an abusive startup run by a young and inexperienced founder and I wanted something more stable. I ended up getting a job at a Toyota research branch in a boring office park and I played with a cute little robot. It was just what I needed at the time.


I've heard of tons of startups that do though, including some very successful ones.

Wasn't Snap, for example, running out of a few houses on the LA beach? Maybe they weren't mansions, but I'm sure the rent was high.


> * Are the founders or employees living in their office?

This is common for startups in the seed stage. I don't personally recommend it but I wouldn't consider it to be a red flag for a pre-series-A startup.

> * Is there adequate parking or public transportation?

There is practically no public transportation in the bay area. No, there is never enough parking.

> * Is it legal? Does it conform to HOA and city requirements?

Nobody gives a shit about HOAs. Startups exist to drill square pegs into round holes. Hell, I'd love to create a startup to help people get rid of HOAs.


You consider HOA requirements and single-site scalability to be red flags for a startup?


You get it. It felt like they were inexperienced and burning cash unnecessarily. I had just left a very unprofessional and in hindsight abusive startup and I didn’t want to walk in to another “boys clubhouse” situation.


Another explanation could be that many startups try to give off the vibe of "just look at how much investor cash we can burn, come join in on the fun!", and are very open about that. I can understand why you wouldn't want to join such a company, even though that is exactly what many others are looking for.


Sure, but now we’re just guessing. We could go on all day listing bad things founders could do, but the original comment simply doesn’t go beyond implying that it’s bad to use a $10M house in Silicon Valley as an office.

The price mentioned isn’t anything unusual even for fairly early stage startups in the Bay Area.


Well, when humans try to explain a culture that was alien to them, they tend to use examples from that encounter. I can very well see the founder proudly saying "Look at this cool 10 million dollar office we have!", and this guy latching on to that statement as representative of their culture. When I work at an office I don't know what it costs; I doubt he would have looked it up if they hadn't told him, so this explanation seems likely to me. Of course to you that is not a good representation of the culture, so you protest here. Which is good - now we all know a bit more about how to state things so others can understand them.


We're not guessing, at least not randomly; we're using intuition: applying experiences and subtle cues to create something more educated than a random guess.

I see you have something like half a dozen comments now trying to rationalize this all across the comment section, but yeah, renting out a mansion for your startup does send out a "vibe".

If you don't realize that a) you're probably who they're looking for, and b) you probably have some room to grow when it comes to picking up on subtext.

-

Not everything in life can be quantified in a neat little bullet pointed list. But part of being an experienced person is being able to pick out the more subtle aspects of situations.

To me a mansion as a startup HQ screams tech-bro 24-7 party culture with some major WLB issues.

And again, it's not like using a mansion magically causes that, but it strongly implies it.

It's like walking into an office and seeing everyone in suits and ties vs seeing everyone in sweatpants and baggy tees.

Technically the choice of clothes does not directly force you to behave a certain way, but your intuition should tell you that those are two very different cultures, and it should tell you a little bit about each.

tl;dr Ignoring intuition because it's not backed by unequivocal fact is not a super-power, and it's not really productive.


> If you don't realize that a) you're probably who they're looking for, and b) you probably have some room to grow when it comes to picking up on subtext.

Don’t get me wrong, I’d never work with the manchildren at comma.ai. I’ve interacted extensively with George Hotz in the past and he’s certainly not a person I’d want to spend any time in the same room with.

I just don’t see any issue with their choice of office space, it just seems like a perfect fit for their business.

> To me a mansion as a startup HQ screams tech-bro 24-7 party culture with some major WLB issues

Where else would you put your car tech startup in SF? Software engineers aren’t going to work out of a garage, and good luck finding a traditional office space with decent facilities for working on cars.

FWIW this is hardly a mansion https://www.redfin.com/CA/San-Francisco/41-Santa-Monica-Way-...

Comma.ai has a “proper” facility now, but they had to move off to San Diego for that.


> Where else would you put your car tech startup in SF? Software engineers aren’t going to work out of a garage

At the same time as the $10m mansion place, I interviewed at a self driving truck company, and they had their office in an industrial park, in a shop that looked like it used to be an auto mechanic's, with upstairs offices. That place was fine, but I didn't feel like working for a 22 year old CEO who had just left college. Back in 2016 investors were dumping money on self driving car startups, but I wanted something stable.


> Where else would you put your car tech startup in SF? Software engineers aren’t going to work out of a garage, and good luck finding a traditional office space with decent facilities for working on cars.

In…a garage. I’m serious - there are spaces big enough in SOMA and other places that can fit a garage space plus open office space plus some conference rooms (I’ve worked in several of them).


To me it seemed more expensive than renting a boring office building with a few workshop bays with roll-top doors. So to me, they were burning unnecessary cash, and this made it seem like they were inexperienced. The CEO was some university researcher who had done some self driving car work and then had investors throw, I think, more than $10m at him. I just got the sense that this guy was feeling flush with cash and wanted to burn some on a luxurious multi-level mansion with a pool and a hot tub.

I had just gotten out of a startup where the guy treated the place like his personal clubhouse and had a really controlling and abusive attitude. I wanted to steer clear of any company that seemed inexperienced.

It apparently doesn’t look that way to you, but that’s how it looked to me and I see other people here can relate.


No, I just meant that they were using some extravagant home in the hills as an office rather than doing the less exciting thing of renting an office in one of the many office parks. It felt like they were wasting money on a lavish lifestyle. To me it seemed like an indicator of a lack of prudent business sense.

I had also just left a startup where the CEO treated the company like some boys' clubhouse and I did not want to get into a situation like that again.


Not comma dot ai, was it?


The bit about big red flags all around seems about right for comma, but I don’t think they’ve ever had an office in SV.

Pretty sure in 2016 Comma.ai was at 41 Santa Monica Way in West Portal.


No it wasn’t them.


> This is really bad; if their software can't even avoid hitting a stationary object, then it isn't ready to test on public roads.

Just slap a Beta tag on it and she'll be right.


They took "move fast and break things" a bit too literally I guess.


Move fast and break bones


That software bug will buff right out


If it does that more often than people do, then it's not ready. People hit signs and dividers all the time. It's hard to come to realistic conclusions based on one case.


A bad driver risks his own life, so he has good reason to try to improve. A bad self driving car program doesn't risk the founder's life, so he has no reason to improve it if regulations don't force him to.


It risks the founder's net worth so there are some reasons to improve, maybe not enough.


Quitting risks their net worth more, so they have an incentive to keep at it potentially regardless of results.


It risks the founder's net worth since we ban their cars from driving if they aren't safe. If we didn't then they would continue to run unsafe programs.


Depends on the details of course but I'd expect an autonomous vehicle to be much much better at not hitting things. Especially stationary objects.

People are bad drivers because they have limited vision, limited attention, are emotional and are easily distracted by all kinds of things.

Just because an autonomous car has a similarly bad track record as the average human, doesn't mean it's ready for the road.


It may be much better. This is a single case, and we get every single case reported. The problem is that computer vision is not the same as human vision, and at some point we may end up with a situation close to "this car drives itself perfectly apart from running over a completely obvious, clearly visible person once a year". And that's going to be a really hard one to process for many people.


Is that same lever being applied to everyone that's testing this right now, or is this making an example of one company?

I know nothing about nothing, so it's a real question.


Well, Tesla's autopilot has been slamming into emergency vehicles on the sides of highways for years and the feds have finally gotten around to investigating things.

Their full self driving beta requires constant vigilance because it likes to randomly veer at pedestrians and light poles.

As a sidenote, it's irritating that Reuters keeps calling it an "accident." It was a crash.


Every other self driving company out there has the same type of incident.


After they got a permit to drive without a safety driver? I don't think so. The permit should be for when the technology is mature enough to never make these kinds of mistakes.


I think this one got revoked because the car was actually in autonomous mode without a safety driver in the car.


It's awe-inspiring how much more quickly a human brain can learn versus a computer. These companies have expended tens of thousands of engineer-hours from some of the best minds in the industry to build models derived from tens of thousands of hours of training data, and still I'd place money on a novice human driver with just ~100 hours of experience besting any AI in the majority of "strange" driving circumstances.

Of course, if that human brain is tired, stressed, and checking their phone, it's a whole different story :)


The driving part is usually normal algorithmic code and not machine learning, though. Machine learning is for object detection; what the software does with that knowledge is regular programming.
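
To illustrate the split being described, a minimal Python sketch (everything here is invented for illustration; real stacks are vastly more involved):

    from dataclasses import dataclass

    @dataclass
    class Track:
        dist_m: float       # distance to object ahead
        closing_mps: float  # closing speed (positive = approaching)

    # Perception (the machine-learning part): in a real stack a neural
    # net turns camera/lidar frames into tracks like these. Stubbed here.
    def perceive(sensor_frame):
        return [Track(dist_m=17.0, closing_mps=9.0)]

    # Planning (the regular-programming part): plain rules over tracks.
    def plan(tracks):
        for t in tracks:
            if t.closing_mps > 0 and t.dist_m / t.closing_mps < 2.0:
                return "brake"  # under two seconds to collision
        return "keep_lane"

    print(plan(perceive(None)))  # -> brake (17 m / 9 m/s is about 1.9 s)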


That's not quite right. There are learned models inside the planning systems of most of the sophisticated self-driving car companies. Here's a public video that gives a few high-level examples: https://vimeo.com/618478174

Full disclosure, I work for Aurora in the Planning group.


How do you know those models are safe? I know Google said they didn't use machine learning for the driving part.

Edit: To clarify, I wonder how you do this part:

> make safe, predictable decisions on the road.

ML models don't make predictable outputs. So you always need a system that can handle the ML model going haywire, since you have no way to prove that it won't.
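
To illustrate the kind of system I mean, a minimal Python sketch (the names and limits are hypothetical) of a deterministic envelope check sitting between an ML planner and the actuators:

    from dataclasses import dataclass

    @dataclass
    class Command:
        accel_mps2: float      # requested acceleration
        steer_rate_dps: float  # requested steering-wheel rate

    MAX_ACCEL = 3.0        # assumed comfort/physics limits
    MAX_STEER_RATE = 90.0

    def guard(ml_command, fallback):
        """Pass the ML output through only if it stays inside the
        envelope; otherwise substitute a precomputed safe fallback
        (e.g. a gentle stop), so a haywire model never reaches the
        wheels directly."""
        ok = (abs(ml_command.accel_mps2) <= MAX_ACCEL
              and abs(ml_command.steer_rate_dps) <= MAX_STEER_RATE)
        return ml_command if ok else fallback

    safe_stop = Command(accel_mps2=-1.5, steer_rate_dps=0.0)
    print(guard(Command(accel_mps2=25.0, steer_rate_dps=700.0), safe_stop))
    # -> Command(accel_mps2=-1.5, steer_rate_dps=0.0)

The guard itself is ordinary, provable code; only the thing it wraps is a black box.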


This is probably a miscommunication issue. Waymo absolutely uses machine learning in the perception and prediction layers of the stack. Underneath that are a few layers of trajectory planning and controls code that don't necessarily need ML.

As for stack accuracy and predictability, they're active areas of research for everyone, and part of the secret sauce. There's lots of little mitigations scattered throughout everyone's stacks for various particular issues, but no silver bullet.


Very much so. The tens of thousands of hours of training is like the learning babies do while looking at jangling keys. Then the driving part is more static -- I'm not sure that it's entirely regular programming; there may be some training of models there. But my understanding is it's certainly not learning/training _while_ driving, and _almost_ certainly it's not even learning to drive. It's just doing more learning for object detection and tracking, like a baby watching the world from a pram.

So I feel like no matter how many autonomous hours these cars accumulate on the road, they have effectively 0 hours of "learning to drive".


I think the only (but very major) difference is that animals and humans learn general knowledge. Someone with 100 hours of driving also has 16+ years of additional general learning.

A driving AI has no general knowledge. It doesn’t actually understand what it is looking at.

It’s not possible to make up reasonable responses consistently when you don’t know what you are looking at most of the time, machine or human. Have you ever tried to solve a technical problem before you read the documentation covering the basic concepts? You’re basically throwing shit at the wall until something sticks.


> I'm not sure that it's entirely regular programming, there may be some training of models there

I doubt it, not anything serious at least. Machine learning isn't at the level where it can be used to navigate environments better than human-coded algorithms can. Maybe if you had a datacenter's worth of computing power to do it, like AlphaStar, but you don't have that in a car. And even if you had that, I'd still doubt it; AlphaStar had a perfect representation of the game and could simulate the effects of moves perfectly.


Agreed, I was trying to suggest that if it's not 100% static code, it's something like 1% models, 99% static code.


Well, to be fair, the human driver in comparison had to have a decade or two of learning before they could drive well.



I had about 16 years of walking and 14 years of cycling around gradually increasingly complex environments before I drove a car.

The car controls are not the challenge. It’s understanding what everyone else is doing and how to handle it safely while making progress and not surprising other people that is tough. And those skills are partially - only partially - transferable from your pre-car experience.


> I had about 16 years of walking and 14 years of cycling

There are young kids who drive karts and race, and they don't confuse walls and solid objects. The reason we don't allow younger people to drive is that young people are bad at managing risk (I did some very stupid, risky things with my bike as a teen; I could have broken my bones or neck).

Autopilot from Tesla, which uses mostly or only cameras, has the issue of identifying obstacles, which is a much more generic problem (one that young animals and kids have solved), so you don't need to appeal to the "age of driving" excuse for these guys.


Not to mention, it's not just driving that makes you a good driver. It's 16 years of life experience. You spend that time growing up riding around in cars and busses. Living in the world, knowing how things work, learning how people behave. Driving is so much more than just pointing your wheel the way you want to go and pressing the gas and avoiding objects.

So many people don't even know all the rules of the road, but still manage to drive just fine without accidents. There are different unknown rules of the road in all different places around the world, but most people figure out how to adapt to them in a very short time.


> So many people don't even know all the rules of the road, but still manage to drive just fine without accidents.

There are huge differences in insurance premiums that can't be completely unrelated to accidents. People like to talk about other factors, but still.


> I had about 16 years of walking and 14 years of cycling around gradually increasingly complex environments before I drove a car.

A lot of that has to do with humans not being born with fully developed brains. Animals can walk and navigate environments as soon as they are born, so there is no reason to believe that humans actually learn these things rather than those systems slowly maturing as we grow up.

The other part is that our pre-trained movement system is made for human bodies, not cars. So learning to drive a car is learning to use another mode of moving yourself. Also, you have to learn traffic rules. But the other things, like understanding environments, you get for free for being a human; every single large animal can do the same. They won't learn the traffic laws or how to drive the car, but they know how to navigate environments without hitting things.


I feel like driving a vehicle is fundamentally different from "natural" activities, because it involves a large difference in how you deal with things ahead versus the sides.

Things in front are highly compressed, while things to the sides are not, and your ability to move sideways is reduced by mass and speed.

When I was a teenager, I went on a road trip where I drove like 12 or more hours in a day, which I could never do now. At the end, I got home and for a while I had a weird sort of tunnel vision that I don't know how to describe. It was almost like I was looking through a fish-eye lens or something; everything seemed distorted because of spending so much time concentrating on small lateral motions and things coming towards me at highway speeds.

Sometimes I walk routes that I also frequently drive and it reminds me how motor vehicles compress time and space.


Birds can fly through thick forests without colliding with trees, and flying is also very clunky: you can't change your direction quickly, etc. If they collide they die, so the selective pressure on them not to hit anything is really strong. Cheetahs can also run at highway speeds on uneven ground and still do just fine.


I spent years riding a bicycle, but there are some fundamental differences between that and a motorcycle, never mind a car.

I'm doubtful that current ML techniques are that general.


> novice human driver with just ~100 hours of experience

So a 100 hour old infant?


Many animals can walk and navigate environments better at 100 hours old than today's self driving cars can.

Here is a 50-hour-old horse; imagine if our self driving cars had this level of environmental awareness:

https://www.youtube.com/watch?v=NJEMh-_vaME


Horses (actually, all the equines) are able to stand within minutes of birth, and run with the herd within hours. This indicates how much is built in before any learning, a useful data point.

(Amusingly, while standing up is clearly built in - foals usually stand on the first try - lying down is not. I've seen a newborn foal try to lie down for the first time and, after moving a few legs, just collapse to the ground. This makes sense - there's a survival advantage to being able to stand up quickly and reliably, while being able to lie down and rest quickly need not work as well.)


>> This indicates how much is built in before any learning, a useful data point.

Animals have a heck of a lot of background knowledge like that. Statistical machine learning algorithms can't incorporate background knowledge [edit: not easily] and have to learn everything end-to-end. Hence why they remain dumb as bricks, compared to even simple animals (like insects, which are "simple" only compared to mammals, say).


This sounds like a start of a dystopian novel, where animal brains are being shoved into various robots. And that a researcher who's trying to stop it, ultimately fails and he's also turned into a robot.


I wonder what comes first: engineered biological brains that we can put in things like cars, or machine learning progressing to the point where we no longer need biological brains.

Biological brains are super cheap and efficient though. The brain of a fly can still do object recognition and navigate environments basically for free, weighs less than a milligram including the cameras, and the factory that produces those brains is just a cell put in a wet environment that builds itself; the brain of an ant can build complex structures, etc. It is possible metal brains will never beat those.


I seem to remember a short story by James Tiptree Jr. with a somewhat similar theme. In the story, children born with deformities were given robotic bodies and then spaceships to navigate, if I remember correctly.


If only you didn’t have to pay them so much it could happen? (Only a little /s)


I occasionally drive in the NYC metro area, and I'm pretty sure a 100 hour old infant could drive better than some of the folks I'm sharing the road with.


>> It's awe-inspiring how much more quickly a human brain can learn versus a computer.

Do you think we live in a Star Trek future already? The human brain is the most complex thing in the entire universe, as far as we know so far. You compare it to what, a computer?


When/if it happens, though, that we get actual level 4/5 autonomous, fast locomotion - that process has to start somewhere.


Again, who is going to be put in jail when the first person is murdered by a driverless car?

Because that's the only motivation for keeping things safe, if it's just a fine corporations will just cut a check and write it off and shrug.

Cyclist on side of road, pedestrian wearing the wrong thing crossing the road, etc.

Sure, a human could have killed them too. But that human would be put on trial or in jail.


Already happened: https://en.wikipedia.org/wiki/Death_of_Elaine_Herzberg

The end result is that Uber ended their self driving car program and paid the family money, while the grunt worker who was put in the car without proper training got charged with negligent homicide; that trial is still going.

The person responsible for that program should be charged, not the safety driver, but that is how things go. You can't just put random people in a car and call them safety drivers, and charging such a person with homicide doesn't really get to the root of the problem.


The problem is the justice system assumes 1 person kills 1 other person.

In reality there were 100s of people in the chain of events, from the safety driver being bored and watching their phone, to the person who hired them and didn't train them properly, to the person who invented the safety driver system and didn't think people would get bored.

Then you have the people who made the decision to turn off some of the subsystems of the car and then test it at night with no extra precautions, and every programmer who let that happen.

Then the management of the company who provided the wrong incentives to their employees.

And that's before you get to the people who designed the roads, and the lack of social care to look after people who for some reason are homeless.


> The person responsible for that program

The problem is it's not really that person's fault either if they were pressured by executives to have a demo in 2 weeks for some investors.

If the investors were of the "demo or die" mindset then the investors bear part of the fault as well.


The thought that drivers will be put in jail for killing someone (especially a pedestrian or cyclist) already seems optimistic. Often they aren't even charged, even in cases of extreme road rage where the collision is deliberate.


In my part of aviation (not in the US) we have a person responsible for each subsystem, and they very much risk going to jail if it turns out they've been negligent. If it's not safe, you don't sign the papers saying it is. People wouldn't dream of doing the equivalent of putting a prototype car on public roads with an aircraft here.


Well, technically murder requires a human assailant and intent, which the car won’t have.

Unless the programmer intends it anyway, then they are the murderer.

Same as a lion can’t murder a human. It’s just being a lion, doing lion things.


On the other hand, crows commonly partake in murders.

((I had to.))


Aren't there entire abandoned cities, and yet-to-be-filled suburb developments in the world? Are autonomous driving companies not using those, stuffed full of "stunt" drivers who are employed to do nothing but simulate regular city traffic?


Waymo built a bespoke, 113 acre facility for this very purpose back in 2017[0][1].

Full disclosure: Google employee, working on nothing to do with Waymo.

[0]: https://blog.waymo.com/2020/09/the-waymo-drivers-training-re... [1]: https://www.wired.com/story/google-waymo-self-driving-car-ca...


Google did that, but there are no regulations forcing everyone who builds self driving cars to do the same.


You really underestimate how much externalized risk some people are willing to accept. If people can save millions by risking others' lives, then many will do so. This is why we have regulations, such as this permit process, which is basically the government saying "your technology needs further testing".


Heck, many industries are based on pollution or addiction: tobacco, alcohol, fossil fuels, casinos, sports gambling, etc.

Basically stuff the owners don't do, but they're perfectly fine if it happens to others (for the pollution bit, I'm reasonably sure the stakeholders and executives live in nice, low pollution areas).


The current state of self driving cars can't handle basic, well behaved, standard traffic.

There's no way it's ready for the sort of adversarial stuff you'd get by hiring a bunch of people to drive around simulating traffic for a car. I guarantee it's a great way to find out how your car does at dealing with being cut off, at people running stop signs, etc. - and there's certainly value in doing that.

Just, not while you still can't avoid hitting stationary objects along the edge of the road.


> The current state of self driving cars can't handle basic, well behaved, standard traffic.

Some self-driving cars can't.

At one end, you have the Uber self-driving vehicles, which were caught committing numerous traffic offenses by dashcams and cyclist helmet cams within the first days of illegally testing their fleet on public roads; then one of their cars ran over a pedestrian whom the vehicle's sensors detected but the computers somehow misclassified - and Uber had disabled the SUV's own auto-braking system, which probably would have saved the woman's life.

At the other end you have Google's self-driving cars, which have had, I think, one at-fault crash across the entire fleet in many, many years and millions of miles of testing. They've been involved in a lot of crashes, but every single one until their one slow-speed scrape with a bus was the fault of another driver. That was 5 years ago or so.

Somewhere in-between is Tesla's FSD "Beta" which, as has been shown by numerous youtube videos (at least, the ones that Tesla doesn't abuse DMCA to take down), repeatedly behaves extremely erratically in the space of less than half an hour in an urban environment. This includes veering off a turn at pedestrians on a curb corner, and in another incident veering toward a light pole. In general, it seems to have a lot of problems making turns.


Cruise and Waymo can go tens of thousands of miles without a safety driver intervention.


Airport shuttles have already driven millions of miles, with no driver's seat or wheel.


What would that accomplish? The goal is for these cars to self-drive in locations where people live, and that does include mapping those locations over many drives. Autonomous vehicles still have a good safety record, and California's willingness to suspend a permit when nobody was injured does signal to me that they'll take it very seriously when a company's safety record falters.


Here's one of them. It's a modern, former company town now used by DHS for counterterrorism training, but AFAIK it's not used for self-driving car testing. Maybe because it's too far from the Bay Area or it's just not well-known.

https://en.m.wikipedia.org/wiki/Playas,_New_Mexico


DARPA did that in 2007, testing autonomous vehicles in realistic urban conditions using an abandoned Air Force base in California.


I believe the first generations of driverless cars have to be overly cautious. If it can't be 100% sure that its model corresponds to reality, and that nothing is going to jump out of the bushes, then it has to stop. A safe car is going to stop or slow down at least once per trip, for the foreseeable future.

I think the car should also talk about what it thinks, to get people to understand the limitations of the technology.

"There could be a child between those parking cars, slowing down"

"I cannot see the road markings, emergency stop"

"Road signs don't match my data, slowing down"

"Train mode activated, following virtual track"

Then, and only then, can you do the second step. Automated cars have the potential to be much safer than cars with human drivers. You can accept some accidents in order to prevent many more. But you can only do that if the public and lawmakers have confidence in the technology. Right now it is way too much of a black box with weird failure modes.
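
A toy sketch of that announce-and-slow behaviour in Python (the percept keys, phrases, and speed caps are all invented for illustration):

    def plan_speed(percepts, cruise_kph=50):
        # (triggered?, announcement, speed cap in km/h)
        rules = [
            (percepts.get("occluded_parked_cars", False),
             "There could be a child between those parking cars, slowing down", 20),
            (not percepts.get("lane_markings_visible", True),
             "I cannot see the road markings, emergency stop", 0),
            (not percepts.get("signs_match_map", True),
             "Road signs don't match my data, slowing down", 30),
        ]
        speed = cruise_kph
        for triggered, announcement, cap in rules:
            if triggered:
                print(announcement)  # stand-in for text-to-speech
                speed = min(speed, cap)
        return speed

    # Announces both concerns, then drives at the tightest cap (20 km/h):
    print(plan_speed({"occluded_parked_cars": True, "signs_match_map": False}))

Every unresolved uncertainty both caps the speed and produces a spoken explanation, so riders learn the system's actual limitations instead of treating it as a black box.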


This reminds me of a situation I saw with another autonomous car company's vehicle in SF a couple weeks back. I truly didn't know what the correct move by the vehicle should have been.

The situation: the vehicle was on Vallejo, crossing west to east over Sansome Street, a one-way, two-lane thoroughfare. There is no stop sign for Sansome here, but there is one for Vallejo. The vehicle stopped at its sign, then began to proceed across the intersection, until two pedestrians entered the crosswalk from south to north along the far side of Sansome. The vehicle stopped in the middle of the street to wait for the pedestrians to cross. There was no cross traffic coming, but what if there had been? It seems like a situation ripe for a side collision, especially because traffic on Sansome is brisk at this point.

I really don't know what the capabilities of the sensors are, or how situations are assessed, so I'm assuming it did what it did based on its situational awareness. I wondered if the vehicle could have backed up to the stop sign it came from if there had been cross traffic, or if it could have broken the rules and gone around the pedestrians if there was danger of a side collision.

The intersection: https://goo.gl/maps/TisMyYc1RhKgJWFf8

PS: there is a garage at that corner that almost exclusively contains Waymo vehicles, and this was not a Waymo car.


Still believe you need AGI for viable SDCs.


Depends on how you define "viable self driving car". If that means it needs to drive at least as well as humans in every scenario you tend to use a car in, then yeah, probably; there are so many places with overgrown roads, having to avoid getting stuck in sand, etc. If however you just want it to navigate highways and city streets, then probably not. The self driving cars won't be perfect around people and human drivers, no, but pedestrians and human drivers can update their own behaviour to compensate. It took a while for pedestrians to understand how to behave around human drivers; the same thing will happen with autonomous cars.


Humans are generally predictable, even crazy ones, since we get to interact with them from day 1, after birth. Plus in many places there are decent tests in place that block some of the crazies.

Computers, not so much.


With all the billions being spent on fully self driving cars, why doesn't a company build a fake indoor town where it can simulate various driving conditions, weather, lighting, etc., put fake pedestrians everywhere, and then have the AI learn in an environment that is simultaneously more challenging (since you can throw all kinds of shit at it) and more forgiving (since mistakes while training aren't hurting people)?


What were the requirements for getting the permit in the first place? What is allowed/disallowed after obtaining the permit?



