I don't understand this approach to get level 5 automation right away. It's too wide a scope to implement. Why don't we start with an experimental inter-city highway where cars could run with full automation?
You drive up to the highway, then relax until you need to get off - sounds like a great start to me. You can equip that highway with a boatload of sensors and transmitters to understand these fancy brainy machines. After a few years, build more such highways with the lessons learnt from the experiments.
By and large, there's no particular reason to think purpose-built highway infrastructure is needed for autonomous vehicles. Maybe to alert for construction zones but we're pretty close to having autonomy on limited access, well-mapped highways today.
The problem is that, while this would be a really nice car feature (and probably great for safety), it does nothing for the people who don't want to own a car and just want a car and computer driver at their beck and call. Unfortunately for these folks, autonomous highway driving is relatively near-term, while door-to-door autonomy is probably decades away.
We’re testing autonomous cars in my municipality, and so far they suck.
If the lines aren’t painted on the side of the road, they can’t drive.
If they meet a road sign they don’t know, they can’t drive. We have several “special” signs in Denmark, but a lot of the time it’s because the sign was too faded/dirty/covered in snow.
If leaves, branches and such are on the road, they can’t drive.
If traffic misbehaves, they can’t drive.
If road quality drops, or there are big pools of water on the side of the road, they can’t drive.
Basically they can’t drive.
I can’t tell you what tech we are testing, but we are trying two different manufacturers and they are both “cutting edge” and we’re testing a fleet of 16 cars over 3 years.
Another municipality managed to get an autonomous bus going in its city center, though, so it’s definitely still coming; it’s just really, really far away.
Interesting insight. I wrote my concern while living in Denmark as well.
About 10 years ago, I read a paper from an Institute of Transportation in Australia about the use of low-power microwave transmitters along the lanes of the highway which could "guide" intelligent cars along the highways, and also the idea of a car mesh that can communicate learnings (as in: I found a pothole here that wasn't in our map, be careful!).
That sounds like a very practical approach, of course, given government investment in roads. After all, it is the government's charter to build public infrastructure! The car companies need to work with public policy and not try to make everything within their own bounds.
Yeah, well, if it's too hard to paint a line on the side of the road, imagine what would happen with maintaining arrays of low power microwave transmitters.
I think they come with different problems and can help add redundancy to data input. I would be scared to sit in a self-driving car that depends only on visibility of lines at the edge of the road. However, if there are enough data points available to the car at all points in time, then it makes the probability of failure low enough.
Imagine a system that munches visual clues + RADAR/LIDAR data + Microwave or RF transmitters embedded in the road + GPS/SATNAV + data from other cars via local mesh network. Now, that I'd think about trusting a little bit. Also, I'd definitely be watching 007 movies while my car drives me :-)
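To make the redundancy argument concrete, here is a toy sketch (my own illustrative numbers, and it assumes per-source failures are independent, which real sensors are not): the probability that every input source is unavailable at once shrinks multiplicatively with each source you add.

```python
def all_sources_fail(failure_probs):
    """Probability that all input sources fail simultaneously,
    under the (strong) assumption of independent failures."""
    p = 1.0
    for prob in failure_probs:
        p *= prob
    return p

# Hypothetical per-source failure rates: lane lines, lidar, road beacons, GPS
sources = [0.05, 0.01, 0.02, 0.03]
print(all_sources_fail(sources))  # ~3e-07, vastly lower than any single source
```

With five or six independent-ish inputs, the car only loses all guidance in the vanishingly rare case where everything fails together - which is the intuition behind "enough data points at all points in time."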
> it does nothing for the people who don't want to own a car and just want a car and computer driver at their beck and call.
Classic disruption (not in the sense that it is often used) would be to start with a niche so limited that it doesn't seem useful to almost anyone, and then expand from that use case.
If you could make a Tesla owner's commute from SF to Sand Hill Road a much better experience (via autonomous-only toll road), I'd say that's one hell of a start.
Dedicated infrastructure for autonomous driving is a non-starter. However, allowing full hands-off on certain roads, perhaps only in good weather, would still be a feature that people would spend a lot of money for. It's not clear at what point people will be legally allowed to watch a video on their commute but we can probably see a path to getting there.
The process of building such a highway in California is beyond even the considerable means of Elon Musk - just the land acquisition and environmental review costs are staggering.
I wish people would wake up and realize this. My hope is that all the hype around self driving cars will result in some momentum around more limited applications.
But it might be another of those the perfect is the enemy of the good type situations.
Because if your self-driving cars need special infrastructure to work you just have very inefficient trains. Autonomous driving is difficult because of rare situations. We already have solutions that work well in good conditions. To figure out what's missing it seems crucial to me to test them in real traffic on normal roads that they have to share with unpredictable humans.
I don't think he means constructing new highways for autonomous cars; just making sure the tech works flawlessly with a few existing, high-traffic, morning/evening rush routes. Would save a lot of stress from the daily commute and allow for a gradual ramp up in supported routes.
I would love to have a car that became an inefficient train on the highway. I believe tens of thousands of people in Los Angeles would pay a lot of money for one. It isn't the ideal for the future, but it would still be a very desirable product.
“The cars would be able to run in fully ‘unsupervised’ autonomous mode on certain, pre-approved and pre-mapped freeways in their respective communities.” Volvo said that drivers would be able to fully disengage from the driving process, instead spending time reading a book or watching a video.
> Why don't we start with an experimental inter-city highway where cars could run with full automation?
> You can equip that highway with a boat load of sensors and transmitters to understand these fancy brainy machines.
Because car companies are not in the business of building highways is my guess.
The "starve the beast" strategy has crippled our government. If this were the early post WWII era we'd have major government projects to do this kind of infrastructure and do it well.
It's not only the US; this disease seems to be affecting all western nations. I feel that we'll have to wait for China to successfully deploy such a project, which will finally provide enough political coordination (read: envy) to convince some western nations to just copy it.
Nah, Singapore will just invest in the infrastructure necessary to make it possible when it’s not good enough for California yet and pass the laws and regulations to make it happen. Probably ban non self-driving cars at the same time.
I’d say our assumption that the big stuff should be done by the government cripples our thinking and stops us from encouraging more innovative private projects. I’ve lost faith in government in a way that goes beyond candidates and parties - I think the structure is broken. I do not want to wish any more power into the hands of thieves. That is, Congress and those who buy the congresspeople.
I can't for the life of me understand why people would think that the sponsors/"corruptors" would act better without a government, even when parts of that government are corrupted by those same private entities.
Also: if you want to see real corruption, just take a closer look at how regular small businesses operate. Befriend an employee of a grocery store. Or at a company selling lightbulbs, furniture, etc. I guarantee you from personal experience, your moral compass will go haywire from the stories you'll hear.
(Also, you'll unfortunately discover that a subset of customers are total assholes. These days I believe that working in a customer service, or other field that has you interact with a wide variety of people on a daily basis, is the best way to ultimately lose faith in humanity.)
I can confirm this. I used to work at a gas station and my bosses would regularly steal gas and write it off as a "business expense" when going to and from home.
Also, customers were the worst. Either from wanting free stuff because they came in so often to wanting preferential treatment because they were older, the vast majority of customers seem to want everything handed to them on a silver platter.
Nah, my company sells lightbulbs and many other things. Yeah some people are unbelievable, and the fraud... but most people are decent. We work hard to be ethical and I think everyone who works here would agree that we have a working moral compass.
Well in a way it's starving because it's become so greedy that it can't get enough food to feed itself anymore. Most governments create more debt than they can ever manage.
> Why don't we start with an experimental inter-city highway where cars could run with full automation?
This seems perfect for an ICO:
Use an ICO to build an autonomous vehicle-only highway stacked with sensors etc. SF to Oakland for instance. Token value increases as more people use the road. Token has inherent value because it’s the only way to access the road.
It's just a more efficient way of raising money and giving liquidity to the investors who believe in the project early on - and unlike bitcoin, the coin has inherent value.
I like this idea too, along with also adding a network so that the cars could all communicate to each other. I have always found horns and turn-signals to be lacking in the amount that can be communicated to other drivers.
The problem with an inter-city highway is that you would have to require every car driving on it to be able to recognize and interact with those sensors. That would be decidedly unfair to those who cannot afford to upgrade their car to go on that road.
So that does bring up an interesting question though, when and where are we going to have roads where only autonomous vehicles are allowed?
I remember seeing this done in the series Tek War: driving locally you drove yourself, then you entered a limited-access highway and the car was able to take over.
The pros for everyone: it's limited access; we already have many areas with HOV and toll lanes that can be adapted; the driving conditions are very simple; and the lanes are easy to mark both visually and electronically. Heck, you could initially finance it by making them all toll lanes until widespread adoption.
Instead of letting cars drive by themselves, they should first learn to park by themselves - eventually emitting a standardized sound, so that passers-by know that the car is in an autonomous parking mode and extra care should be taken.
The sampling rate of the sensors should be diminished, maybe halved or even more, in order to make it harder for the car to park.
The car should observe, also with a low sampling rate, how the driver drives the car, make its own prognostics on how it would drive, and compare them with what the driver chose to do, in order to learn. In that case the driver should be able to indicate to the car that a certain action he/she performed should be forgotten by the car, flagged as an illegal or dangerous maneuver.
When a car can do that well - park and project - then the sample rate should be upped in order to feed the car with more reliable data. If it handles that data without hiccups, and with increased reliability, then the next stage can begin.
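The observe-compare-forget loop described above could be sketched roughly like this (all class and method names are my own invention, not any real system's API): the car records what it would have done next to what the driver actually did, and the driver can flag a maneuver so it is dropped from the training set.

```python
class ObservationalLearner:
    """Toy sketch of a car learning by watching its driver."""

    def __init__(self):
        self.examples = []  # (situation, prediction, action) records kept for training

    def observe(self, situation, predicted_action, driver_action):
        """Record the driver's choice alongside the car's own prognosis."""
        record = {
            "situation": situation,
            "predicted": predicted_action,
            "actual": driver_action,
            "disagreement": predicted_action != driver_action,
        }
        self.examples.append(record)
        return record

    def forget_last(self):
        """Driver flags the last maneuver as illegal/dangerous: drop it."""
        if self.examples:
            return self.examples.pop()

learner = ObservationalLearner()
learner.observe("child near curb", predicted_action="slow", driver_action="slow")
learner.observe("yellow light", predicted_action="stop", driver_action="accelerate")
learner.forget_last()          # driver marks running the yellow as a bad example
print(len(learner.examples))   # 1
```

The `disagreement` flag is where the interesting data lives: cases where the car's prognosis diverged from the human's choice are exactly the ones worth studying before upping the sample rate.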
> And in some areas, we are finding that there were more issues to dig into and solve than we expected.
Who could've thunk it?! The people who think we're going to have real Level 5 autonomous driving in 2 years (not the "we're bullshitting you with Level 5, but it's really more like Level 3.5" kind) are insane.
There's no way we're going to simulate every single condition a car could encounter anywhere on Earth and get the cars to do the "right thing" 100% of the time by 2019-2020.
I'll be impressed if they even deliver Level 4 (working perfectly only on some types of roads) by 2020. But I think even then car makers will "encounter the unexpected".
It's going to take many years to test these things. And no car makers seem to even mention how they're going to address all the security issues self-driving cars will have.
> There's no way we're going to simulate every single condition a car could encounter anywhere on Earth and get the cars to do the "right thing" 100% of the time by 2019-2020.
I like how you added "by 2019-2020", as if doing the right thing 100% of the time was ever possible - let alone the goal. Of course it isn't - Level 5 just means as good as a human. An extremely rare situation won't be handled well by a human either.
The important calculation then is how many different types of rare situations are encountered, what it means to be a rare situation, how often all rare situations happen in total, and how much better or worse the human or computer does. When I hear about Uber’s cars having issues with the apparently rare situations of, say, street lanes being restriped in a different place, or with delivery vans parked partially in a lane, I realize that much city driving is a bunch of rare unusual situations strung together in a row that requires constant improvisation. Construction, wrecks, emergency vehicles, delivery vehicles, weather, power outages, traffic signals on the fritz, funeral processions, pedestrians, loose debris from other cars, blown tires in the lanes, faded lane lines, parking lots, traffic jams, farm vehicles, moving vans... I think we’re going to find out that an acceptable level of automated safety ends up making automated cars so poor at navigating around the real world that it’s going to be a commercial failure.
The problem is that a self-driving company would get blamed for accidents across all of its cars. Imagine if you had 100,000 cars on the road and got sued for an accident involving every one of them. This aggregation of blame is the core issue; it otherwise gets distributed among millions of human drivers. It invariably requires that a self-driving car be many orders of magnitude better than regular humans.
Even if companies figure out a way to not get sued, it's only a matter of time until some very serious tragedy happens, such as a pregnant woman wearing a dress the same color as the sky getting killed while crossing the road, or a car running into school kids. Then there are obvious malicious uses, such as modifying a car's sensors to fool the self-driving system and purposely run it into people (the car-as-weapon scenario). One such event could be the trigger for a large public outcry, heavy regulation, and finally game over for self-driving cars.
I think it might be more desirable to approach self-driving cars in a more evolutionary fashion. We could start with self-driving only in sub-25 mph scenarios such as heavy traffic, or on sunny days on highways. Then we could start equipping our road networks with dedicated self-driving lanes, supportive beacons on roads and so on, and gradually move towards making all lanes self-driving.
In machine learning, the cost of going from 90% to 95% is typically the same as going from 0% to 90%. This is why every little percentage point gets celebrated wildly after you cross 95%.
Agreed. I don't know what the percentage is, but there will certainly come a point when it will be cheaper to pay out when (a relatively few) people die rather than invest in further R & D. The stagnation in just about every other field of 'mature' technology more or less guarantees it.
If it proves difficult enough, that point will come when self-driving cars are only marginally better than human drivers. That is an unlikely scenario, but progress will probably stagnate at a far less perfect level than all but the most pragmatic dreamers envision.
>>> a point when it will be cheaper to pay out when (a relatively few) people die rather than invest in further R & D.
Not before a long time.
A death is really expensive; Americans are trained to sue like crazy, a car accident is a simple case to grasp and attribute, and the manufacturer has a lot of money to pay.
I'm not sure that the EU is known for particularly lax product liability laws either. Whether it makes sense statistically or not, I can pretty much guarantee that automated vehicles causing deaths as a result of stupid decisions (from the perspective of a human driver) will be shut down in a big hurry. At which point, it becomes public that it was known that the software had problems handling X, Y, and Z sort of situations but it was put on the road anyway.
The EU relies more on regulators. They will take some time to investigate issues and shut down the perpetrators.
The US has this "anyone can sue anyone at anytime for crazy damages" system. I'd expect any issue to quickly be brought to court and make an example out of them.
Last but not least: after the first death related to a self-driving car, the second death will be on the journalists fighting for the coverage.
That was in a level 2 system, in which the driver is required to constantly pay attention. Many companies have shipped level 2 systems, and that one death is not the only one.
I like to drive around thinking about what it is like to be a driving (human) computer. Every time I touch the brake because I see a kid standing a few feet from the curb and a small dog on the other side of the street is running, I wonder when an electronic computer will understand that the child might dart across the street after the dog.
Of course, I've learned these things from growing up human, but only one computer needs to conquer a driving challenge, maybe driving on a snow hidden road (watch out for barely noticeable ditches on the sides of the road), and then all self-driving cars can do it. It will be interesting to see how long it will take to achieve level 5.
A computer won't be speeding on a residential street. It will identify a child standing on a curb for tracking but otherwise not react until it determines that the kid's trajectory will likely intersect the car's. In that case it will brake as hard as it needs to bring the car to a quick and safe stop.
It probably shouldn't be going fast enough in the first place for it to be an issue in suburban situations. It should be able to react in time even if a dog/child appears from behind a large parked vehicle. This should also be true of human drivers, but probably isn't.
But yes, deer beside faster roads etc. should probably trigger precautionary measures to ensure reaction times will be sufficient.
I surely hope that we will be much less forgiving of machines beating our children to death because they had the temerity to play in front of their own house than when adults beat children to death for the same reason.
It's despicable that anyone thinks it's acceptable, but it's different to say "you aren't allowed to beat that child to death when you're trying to get home after an exhausting day of work" than to say "Google isn't allowed to beat that child to death in order to sell a few extra taxis".
Because people and, perhaps, objects that can't be readily distinguished from people stand on sidewalks and at curbs all the time without an intent to walk into the street. There's a fine line between defensive driving and driving so conservatively that you're constantly applying brakes because "something" might happen.
Because a human driver slows down in order to account for their reaction time and ability to continuously track multiple objects around them. Basically, they know that by the time they realize the kid is about to get run over, it will be too late to stop. Computers are already much better at tracking things and reacting to them with millisecond precision.
(That's not to mention the usual disregard for traffic rules - and thus basic safety - an average human driver has, which other commenters have touched on.)
Humans have atrocious reaction time to become aware of danger/lift foot/press brake, whereas machines can react instantly
> Reaction times vary greatly with situation and from person to person between about 0.7 to 3 seconds (sec or s) or more. Some accident reconstruction specialists use 1.5 seconds. A controlled study in 2000 (IEA2000_ABS51.pdf) found average driver reaction brake time to be 2.3 seconds
There's no question that machines have faster reaction times, but the other part of the equation is predicting how much a vehicle should slow down in any given situation in order to reduce braking time. The parent was talking about looking at a fairly complicated scene and making a decision about reducing their speed so that they could stop more quickly if it becomes necessary. We all know that there's a big difference between driving by a child playing aimlessly and somebody walking predictably on a sidewalk. It's much more difficult for an automated system to make these sort of predictions than it is for them to react quickly to emergency situations. My guess is that automated driving systems will have to err on the side of caution and that this will result in seemingly (to the passenger) unnecessary slow-downs. I don't personally think that this is a huge problem, but it's fair to say that reaction time is only one one of several significant factors in avoiding accidents.
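The reaction-time gap can be put into numbers with standard stopping-distance kinematics (the 2.3 s figure comes from the study quoted above; the 0.1 s machine latency and the 7 m/s² dry-pavement deceleration are my own illustrative assumptions):

```python
def stopping_distance_m(speed_mps, reaction_s, decel_mps2=7.0):
    """Reaction distance (speed * reaction time) plus braking distance
    (v^2 / 2a), the usual constant-deceleration approximation."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

v = 13.9  # ~50 km/h expressed in m/s
human = stopping_distance_m(v, reaction_s=2.3)    # the 2.3 s study figure
machine = stopping_distance_m(v, reaction_s=0.1)  # assumed near-instant reaction
print(round(human, 1), round(machine, 1))  # 45.8 15.2
```

At 50 km/h the human's reaction distance alone (~32 m) is more than double the machine's entire stopping distance - which is exactly why the remaining hard problem is the anticipatory slow-down, not the emergency stop.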
A computer can compute that the maximum expected velocity of that child-looking object on the sidewalk is min(hardcoded maximum human velocity, maximum observed velocity), and use that to quickly calculate whether or not it will have enough time to brake, assuming the child-like object suddenly starts moving straight towards the road at maximum velocity.
Unless you expect rocket-propelled children being launched from the sidewalk, that's pretty much it.
(Of course the actual implementation within the entire system will be more complicated, but my point is - a computer can precisely compute what a human tends to intuit.)
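A minimal version of that envelope check might look like this (all the numbers and function names are illustrative assumptions, not a real planner): can the car come to a stop before the point the child-like object could reach, if the object sprints toward the road at its worst-case speed?

```python
HUMAN_MAX_SPEED = 10.0  # m/s, assumed hardcoded upper bound for any human

def worst_case_speed(max_observed):
    """min(hardcoded maximum human velocity, maximum observed velocity)"""
    return min(HUMAN_MAX_SPEED, max_observed)

def can_stop_in_time(car_speed, decel, lateral_gap, dist_along_road, obj_max_observed):
    """True if the car halts short of the object's possible crossing point,
    or clears that point before the object could possibly reach it.

    lateral_gap: metres from the object to the edge of the car's path.
    dist_along_road: metres from the car to the potential crossing point.
    """
    obj_speed = worst_case_speed(obj_max_observed)
    time_to_road = lateral_gap / obj_speed        # soonest the object can arrive
    car_stop_dist = car_speed ** 2 / (2 * decel)  # braking distance, v^2 / 2a
    return car_stop_dist < dist_along_road or dist_along_road / car_speed < time_to_road

# 50 km/h, 7 m/s^2 braking, child 2 m from the lane and 20 m ahead
print(can_stop_in_time(13.9, 7.0, 2.0, 20.0, 3.0))  # True: stops in ~13.8 m
```

As the comment says, the real system is far more complicated (reaction latency, uncertainty in the object's position, multiple objects), but the core safety invariant really is this kind of precise worst-case computation of what a human driver only intuits.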
That sounds like it would end up with a situation where every time there is a human near the path, the car needs to slow down to 5 mph in case they decide to jump out into the road.
A human can tell if there is something going on at the side of the road that might make the human jump into the road. Machines are nowhere near being able to understand the context of many situations that allow humans to predict these things.
> My guess is that automated driving systems will have to err on the side of caution and that this will result in seemingly (to the passenger) unnecessary slow-downs.
How are you expecting to regulate implementation-specific details like this across the industry? How would you enforce requirements like this at all? If it's left to the invisible hand of the market, I would assume demand would be higher for cars with more aggressive driving styles that get you from A to B faster.
360 vision doesn’t change physics, and even the best vision system doesn’t help you when a child darts ten feet in front of your car.
If you don’t anticipate the possible movement, you will be too slow. I have very little faith that we are anywhere near systems that can handle this kind of situational awareness, because it requires classification systems and object models of the world that modern AI has yet to reproduce.
The average level of human intelligence is a lot higher than most casual observers realize.
I would guess that they already identify and track pedestrians better than the average human driver (mostly because I expect they do it at greater distance). Whether they model kids standing by the curb better is hard to say.
A marketing document does not convince me. But even if I take this at face value, they’re talking about estimating future movements based on current trajectory, which is fine and good, but not the scenario raised by the OP.
If I see a child standing near the side of the road, I use more than the velocity and trajectory of the child to estimate future behavior: I look at the direction the child is facing, the overall situation (e.g. is the child playing a game?), what the child is paying attention to, and so on. A child waiting at a bus stop is a dramatically different scenario than a child looking across the street at a puppy. A short adult standing on the side of the road is dramatically different than a child.
This is a hard problem involving multiple levels of recognition and inference. I have little faith that it is solved. My suspicion is that the “engineering solution” is used (i.e. slow down when a human-probable object is near the road). That might work, but will lead to a car that drives like a paranoid senior citizen with bad eyesight.
I have a suspicion that we’re going to look back in a decade and realize that most of these problems are fundamentally intractable, and that the best any system can do is react via human-encoded heuristics. If so, the path to full autonomy will be an asymptotic one; it will not happen quickly, but through decades of gradual refinement, with lots of fatalities along the way.
I think that the always-sensible speed choices of the automated system will result in a much larger reduction in pedestrian fatalities than any increase from the lack of subtle inference you are concerned with.
If you think “heuristic” contradicts “intractable”, you don’t understand the meaning of the word.
Intractable problems can be solved sometimes, just not reliably or in bounded time. If we get to the point where self-driving cars depend on human-encoded rules for reaction, we’ll simply be trading one set of messy heuristic behaviors (people) for another (robots with bad sensors and limited domain awareness).
Will the automated systems be “better” with enough time and investment? Perhaps. But dreams of a fatality-free automobile future will remain science fiction.
An A.I. system with several million cars will encounter this exact situation several times a day, so after a few months it will have seen a lot more scenarios than a human ever has or would anticipate.
> it requires classification systems and object models of the world that modern AI has yet to reproduce.
I'm pretty sure that in the millions of miles of, say, Waymo's captured video, there have been a lot of people (incl. children) stepping off the sidewalk to cross the road - you don't need that to happen right in front of your car - so their system analyzing the images does recognize pedestrians and thus their potential for movement.
Then everybody will hate them, because our expectations of the speed at which a car gets from A to B tolerate an amount of risk larger than what BigCorp is willing to tolerate.
Nobody wants a self driving car that's always driving like a student driver.
It is an assumption that getting from A to B with little risk will be slower. With the sensors available and lower reaction times, the opposite could well happen. A computer can look every way at once at a stop sign, doesn't have to waste time making a judgement call because it already knows the answer, and doesn't have to wait for its foot to hit the brake or accelerator.
>Personally I don't really care if it takes 5% longer to get where I needed to go or not.
>Especially if I get the huge benefit of not having to drive.
>Driving risky really doesnt speed up your commute very much.
A car behaving like a student driver (or delivery truck) at every intersection where it needs to pull into traffic could easily double or triple your commute time, depending on your commute.
Pulling out into traffic should be something that a driverless car can be much better at. If it isn't at least as good as the average prospective customer, people won't buy it.
I would say this is a particularly critical metric for taxi fleets, since they do a lot of driving on city side streets. People will take the taxi with the human driver if it's faster.
The '03 Crown Vic has long since paid off its capital costs, and the cab company gets to blame the driver if things go far enough south for lawyers to get involved.
Until some mythical future where self-driving cars are so good and common that the "progressive" states start providing financial disincentives for people to operate their own vehicles (which would be a pretty major about-face from the vast majority of all transportation and infrastructure regulation to date), I don't see where the driverless taxi has a cost advantage in the foreseeable future. Your insurance premium is in large part based on the presence of everyone else around you.
Children are absolutely large enough to be noticed. The issue is that car drivers think it's okay to ignore them. They're choosing only to look at cars because we train people from a young age to think of the black stuff as a space where only cars go. And because we make it illegal to drive safely. If you knew you were going to get treated the same for threatening a small child with a car as with any other lethal weapon, you'd drive much more safely.
> I wonder when an electronic computer will understand that the child might dart across the street after the dog.
It already does. Google colloquially calls it the "idiot detector". It includes things like small children, teenagers on skateboards, bicyclists, etc.
It was responsible for a bit of hilarity: when a hipster was rocking on his fixie at a stop sign, the car would repeatedly start and stop entering the intersection.
Cars are probably better than humans at detection now.
You missed the point of the question. The point is that a human will see a child and a dog playing somewhat close to the road, will see the dog run out, and will infer that the child will dart after it before the child makes a move.
Self-driving cars see the dog and the child somewhat close to the road and classify them as a hazard IMMEDIATELY and start adjusting for them. And, if they lose track of them, the car goes into "Unseen Idiot" mode. You don't need the dog running out to focus their attention like a human does.
Self-driving cars don't have the attention span problems that humans have. Self-driving cars can watch more than 7 +/- 2 objects (much more) without diverting their attention.
Which means that self-driving cars can watch all 6 of those little kids walking, as well as the 4 on bicycles, and the two playing with the dog over there.
This is why self-driving cars will win ... and quickly.
Additionally, a self-driving car doesn't even need to care about recognizing children and dogs as members of their respective species. All they need is to notice there is a moving object, and to have some expectation of its maximum velocity. It's enough to compute the envelope that lets it safely avoid collision.
Waymo is capable of making these kinds of inferences, though I don't know if it can handle that specific case. From the safety report:
> Waymo’s planner can also think several steps ahead. For example, if our software perceives that an adjacent lane ahead is closed due to construction, and predicts that a cyclist in that lane will move over, our planner can make the decision to slow down or make room for the cyclist well ahead of time.
It's funny I'd always conceived of AIs as being potentially better than humans, but this AI sounds more like an artificial Jeremy Clarkson, boiling with contempt
In an object-oriented language, specify the functions which an object may call, and how that object calling those functions (move("run","100%-speed","straight-forward","low-alertness")) might interfere with functions you may call (move("drive-car","2%-speed","straight-forward","full-alertness").
Pre-process a few of the child's available functions, and some of yours, to find any collisions in the data. Decrease speed on a gradient equivalent to the probability of collision. (Step 1 - access child's datastore, or be a similar-enough neuroprocessor that the same data is replicated to you locally.)
The number of objects in motion in a roadway space can make this processing prohibitive, which is why we failover to humans. Also, adult humans have more experience (data) at being a child, and so are much more capable of analyzing and predicting with this data, than a self-driving car - at least today.
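The toy model above can be made concrete. This is a hedged sketch of the parent comment's idea, not a real planner: enumerate a few of the child's possible "function calls" (moves) and the car's, count which combinations end up colliding, and scale speed down on a gradient proportional to that collision probability. All moves, positions, and the collision test are invented.

```python
# Enumerate child moves x car moves, count colliding pairs, and reduce
# speed by the resulting collision probability. Each move is a tuple of
# (start position, unit heading, speed); positions are in meters.

import itertools

def future_pos(start, heading, speed, t=1.0):
    x, y = start
    dx, dy = heading
    return (x + dx * speed * t, y + dy * speed * t)

def collision_probability(child_moves, car_moves, threshold=2.0):
    pairs = list(itertools.product(child_moves, car_moves))
    hits = 0
    for (c_start, c_head, c_speed), (v_start, v_head, v_speed) in pairs:
        cx, cy = future_pos(c_start, c_head, c_speed)
        vx, vy = future_pos(v_start, v_head, v_speed)
        if abs(cx - vx) < threshold and abs(cy - vy) < threshold:
            hits += 1
    return hits / len(pairs)

child = [((5, 3), (0, -1), 2.0),   # runs toward the road
         ((5, 3), (1, 0), 1.0)]    # walks along the sidewalk
car = [((0, 0), (1, 0), 10.0),     # full speed ahead
       ((0, 0), (1, 0), 4.0)]      # already slowing

p = collision_probability(child, car)
safe_speed = 10.0 * (1.0 - p)      # decrease speed on a gradient
```

The parent's point about cost shows up immediately: the pair enumeration is quadratic in the number of moves considered, and real scenes have many objects each with many plausible moves.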
Can someone point out what are the major issues for self-driving cars? What are the main things stopping GM/Volvo/Google from deploying a fleet of cars?
The process of autonomy, at every time instant, may be broken down into the following stages: 1) sensor observation, 2) perception, 3) intent modeling, 4) path planning, 5) control action.

1) Sensor observation is the collection of video, radar, lidar etc. data.

2) Perception is the interpretation of that sensor data into a meaningful representation of the 3D environment, both static and dynamic: tasks like object detection, localization, tracking, and semantic understanding (think of it as computing a physics engine for the world).

3) Intent modeling is the prediction of what the moving objects might do in the future (e.g. is that car just drifting a bit, or is it about to merge into my lane?).

4) Given the outcomes of 2) and 3), path planning answers the question: where should I drive the car through my estimate of the environment and how it might change?

5) Control is the execution of the planned path, by manipulating the steering wheel, gas, brake etc.
Of the different aspects of autonomy, perception and intent modeling are the unsolved pieces, with the other aspects being relatively well understood. The quality of your sensors (resolution, dynamic range, depth range for lidar/radar etc.) affects the difficulty of the perception task, as does computational power, but even with perfect sensors and high compute the problem is difficult (recognizing the difference between a rock and a crumpled piece of paper requires algorithmic processing of sensor data). The difficulty of perception is best illustrated by pointing to the field of computer vision, which is essentially focused on solving that problem. What seems easy to a human is quite hard for a computer; it only feels easy at the conscious level, while in fact a large share of the human cortex is devoted to visual processing.
All the steps after perception rely crucially on it. Even if perception were perfectly solved, intent modeling would still be a difficult problem, though relatively easier than perception, as it involves reasoning in a lower-dimensional state-action space, albeit with partial information. To make a comparison, intent modeling for driving in urban environments is perhaps harder than beating humans at Go, and may be as hard as beating humans at poker.
If perception and intent modeling are solved, the execution of path planning and control is relatively well understood.
To summarize, the main issues are perception and intent modeling, and these are fundamentally difficult AI problems. So the main thing holding back GM/Volvo/Google is algorithms.
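The five-stage loop described above can be written down as a skeleton. The function names and data shapes below are placeholders of my own, not anyone's actual architecture; the point is just that the loop structure is simple while perceive() and predict_intent() hide the unsolved AI problems:

```python
# Skeleton of the autonomy loop: sense -> perceive -> predict intent ->
# plan -> control. Stubs stand in for each stage.

def sense():
    """1) Collect raw camera / radar / lidar frames."""
    return {"camera": None, "radar": None, "lidar": None}

def perceive(raw):
    """2) Turn raw sensor data into a 3D scene: objects, poses, ego position.
    Unsolved in general; this stub returns an empty scene."""
    return {"objects": [], "ego_pose": (0.0, 0.0, 0.0)}

def predict_intent(scene):
    """3) For each moving object, predict likely future trajectories.
    Also unsolved; stubbed out here."""
    return {id(obj): [] for obj in scene["objects"]}

def plan_path(scene, intents):
    """4) Choose a trajectory through the predicted future scene."""
    return [(0.1, 1.0, 0.0), (0.2, 2.0, 0.0)]  # (t, x, y) waypoints

def control(path):
    """5) Convert the planned path into actuator commands."""
    return {"steer": 0.0, "throttle": 0.2, "brake": 0.0}

def autonomy_tick():
    raw = sense()
    scene = perceive(raw)
    intents = predict_intent(scene)
    path = plan_path(scene, intents)
    return control(path)
```

In a real stack this tick runs many times per second, and stages 2 and 3 each consume the bulk of the compute budget.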
> What are the main things stopping GM/Volvo/Google from deploying a fleet of cars?
I'm not an expert in the space, but it seems the main issue is the technology is still in (generously) alpha. Basically: the blocker is the technology doesn't work, for any definition of "work" that a layperson would recognize.
Speed control, lane keeping and basic rules are fairly simple. But recognizing a red light when the sun is behind it? Or the bulb is burnt out? Or power is down? Or your windshield is covered with water from a deluge of rainfall? And so on for every single condition, corner, intersection, etc.
The tail end. Every successful drive is alike, but every accident occurs in its own way.
It's why 'autopilot' Teslas drive into the side of semi-trucks and rear-end delivery vans pulled over on the shoulder. And we're not even talking about snowy conditions.
Level 2 systems require the driver to remain alert. I'm not sure how you can generalize from a production level 2 system to a hypothetical level 5 system.
Isn't this how most experiments are supposed to work? You start with a broad assumption and work towards it. Then you find issues and start scaling things down to focus on some finer issues.
In the meantime, business executives and media work hand-in-hand to hype everything and promise things which cannot be delivered in a short amount of time.
The thing is Volvo's experiments haven't been going as well as others.
In the race to build a functional autonomous vehicle some companies are getting it done, and others, in spite of big promises, several years of effort, and scaled, well capitalized operations have very little to show for their efforts.
Everybody still has a lot of work to do, but the operations that can, at the very least, demonstrate as a proof of concept their cars handling a few miles of uninterrupted driving in dynamic environments have cleared the biggest hurdle. After that come tens of thousands of smaller validation hurdles stretching out as far as the eye can see, but so long as they've cleared the first, biggest one, you can be reasonably confident they'll get to a minimum viable product eventually, so long as they keep at it.
I have confidence in Waymo, GM, and Zoox. With everyone else it's either too soon to tell, or I don't have enough information, or they're sucking.
I think you are on the right track here. I know some of the people hired by Volvo Cars to work on this, and I am not aware of them recruiting top ML researchers; I suspect the people there just aren't good enough. Waymo, on the other hand, will probably get it done, and done really well.
That's because Volvo's AV software effort is a joint venture w/ Autoliv called Zenuity. https://www.zenuity.com
So if you're looking for CV/ML people working on the perception part of the stack, look there.
FWIW, I work in this field now and I have fairly low confidence in the non-ML parts of the software stack across the industry. There's a lot more to this problem than well manicured computer-vision demos, and Volvo is a lot less cavalier about loss of human life than almost anyone else, so it's fairly heartening to see them realign around realistic expectations.
I'm at a loss as to what it is about Volvo's progress (or lack thereof) that is non-obvious enough to warrant investigation into published experiments that don't exist.
Was your statement "Volvo's experiments haven't been going as well as others" based on any knowledge whatsoever? I'm just trying to understand what you are basing that on and how well-founded it is.