I found it especially confusing that certain numbers were presented as the 2019-2021 delta and others as the 2020-2021 delta, with no way to see all the data together. Thanks for linking to the original source; when I looked for it last night, it was difficult to find.
One of the questions I've repeatedly had regarding FSD (and Tesla's approach in particular) is the notion of memory. While a lot of these scenarios are disturbing, I've seen people wavering between lanes, hesitating at exits, and attempting to turn the wrong way onto one-way streets. People have memory, however. If we go through the same confusing intersection a few times, we'll learn how to deal with that specific intersection. It seems like a connected fleet of FSD cars could perform that learning even faster, since any car could report the interaction rather than each driver learning it individually. Are any of the FSD implementations taking this into account?
This has been a common assertion about Tesla's "leadership" in the field - that they can learn from all the cars, push updates, and obviously not have to experience the same issue repeatedly.
It's far from clear that, in practice, they're actually doing this. If they are, it must be fairly recent, because the list of "Oh, yeah, Autopilot always screws up at this highway split..." complaints is more or less endless.
GM's Super Cruise relies on fairly solid maps of its areas of operation (mostly limited-access highways), so it has an understanding of "what should be there" to work from, and it seems to handle the mapped areas competently.
But the problem here is that the learning requires humans taking over, and telling the automation, "No, you're wrong." And then being able to distill that into something useful for other cars - because the human who took over may not have really done the correct thing, just the "Oh FFS, this car is being stupid, no, THAT lane!" thing.
And FSD doesn't get that kind of feedback anyway. It's only with a human in the loop that you can learn from how humans handle stuff.
Great, thanks for that info. I'm remembering the fatal crash of a Tesla on 101, where the family said the driver had complained about the site of the accident before. It's interesting to know that there's at least a mental list of places like this even now. Disengagements should at least prompt a review of that interaction to try to understand why the human didn't like the driving. Though at Tesla's scale, that has already become something that has to be automated itself.
RE: determining whether what the human did on takeover was actually correct
It's a QA department. If there is a failure hot spot, then take a bunch of known "good" QA drivers through that area. Assign strong weight to their performance/route/etc.
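A minimal sketch of that weighting idea (all names, sets, and multipliers here are hypothetical, just to make the shape concrete):

```python
# Sketch: upweight training trajectories collected by vetted QA drivers
# inside known disengagement hotspots. Everything here is hypothetical.
from dataclasses import dataclass

@dataclass
class Trajectory:
    driver_id: str
    region: str          # e.g. a geohash tile for the route segment
    frames: list         # sensor/action frames used for training

HOTSPOTS = {"9q8yyk", "9q8yym"}     # tiles with frequent disengagements
QA_DRIVERS = {"qa-001", "qa-002"}   # vetted "known good" drivers

def sample_weight(t: Trajectory) -> float:
    w = 1.0
    if t.region in HOTSPOTS:
        w *= 5.0                    # hotspot data matters more
        if t.driver_id in QA_DRIVERS:
            w *= 4.0                # trust the QA drivers' handling most
    return w
```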
It's interesting reading through all this; I can see a review procedure checklist:
- show me how you take hotspot information into account
- show me how your QA department helps direct the software
- show me how your software handles the following known scenarios (kids, deer, trains, severe weather)
- show me how you communicate uncertainty and requests for help from the driver
- show me whether there are plans for a central monitoring/manual takeover service
- show me how it handles construction
Also, construction absolutely needs to evolve convergently with self driving. Cones are... ok, but some of those people leaning on shovels need to update systems with information on what is being worked on and what is cordoned off.
> Also, construction absolutely needs to evolve convergently with self driving. Cones are... ok, but some of those people leaning on shovels need to update systems with information on what is being worked on and what is cordoned off.
No. If the car cannot handle random obstructions and diversions without external data, it cannot be allowed on the road.
Construction is often planned ahead of time, but crashes happen and will continue to happen, and if an SDC can't handle being routed around a crash scene without someone having updated some cloud somewhere, it shouldn't be allowed to drive.
First responders need to deal with the accident, not be focused on uploading details of the routing around the crash before they can trust other cars not to blindly drive into the crash scene just because it's stationary and not on a map.
And if you can handle that on-car, which I consider a hard requirement, then why not simply use that logic for all the cases involving detours and lane closures?
Then broadcast an alert on a status channel that the AI can take into account. They're going to do it anyway for traffic and other hazards.
Doing so ahead of time for planned construction is not a big ask.
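Loosely sketched, an alert on such a status channel might look like this; the field names are invented, and real V2X message sets (e.g. SAE J2735) look quite different:

```python
# Loose sketch of a status-channel hazard alert. Field names are
# invented; real V2X message standards (e.g. SAE J2735) differ a lot.
import json
import time

alert = {
    "type": "crash_scene",              # or "construction", "lane_closure"
    "lat": 37.4275,
    "lon": -122.1697,
    "lanes_closed": [1, 2],             # leftmost lane = 1 (assumption)
    "detour_hint": "shoulder open, 20 mph",
    "valid_until": time.time() + 3600,  # expire after an hour
}

payload = json.dumps(alert)             # broadcast on the status channel
```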
"And if you can handle that on-car, which I consider a hard requirement, then why not simply use that logic for all the cases involving detours and lane closures?"
You're making the same mistake Musk made when insisting that the car be able to navigate regardless of location or connectivity. Ignoring the ability of networking/radio broadcast/internet databases to provide vastly deeper pools of information is a big mistake.
My thoughts exactly. I've made those mistakes myself, many times.
I guess I sort of assumed that Tesla would do three things:
- Record the IRL decisions of 100k drivers.
- Run FSD in the background (shadow mode) and compare its decisions with those IRL decisions. Forward all deltas to the mothership for further analysis.
- Some kind of boid/herd behavior: if all the other cars drive around the monorail column, or go one direction on a one-way roadway, follow suit.
To your point, there should probably also be some sort of geolocated decision memory, e.g. when at this intersection, remember that X times we ultimately took this action (rough sketch below).
It would be pretty simple for an official Tesla employee to confirm that at this location there's a giant concrete pillar. Or, worst case, at this location deactivate FSD and require human control until you're outside the geofenced area. They could do that with a simple OTA update; GM and Ford have taken this approach.
The FSD could then infer that no other cars passed through medians, planters, and columns. It could infer that only buses travel in restricted lanes. It could infer that all traffic on a one-way road goes one way.
And if FSD remembered its own decision every prior time, it could reconfirm its current decision.
In other words, it could learn from every other vehicle and its own history.
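A rough sketch of how that geolocated decision memory plus shadow-mode comparison might fit together; purely illustrative, every name here is made up:

```python
# Sketch: key past decisions by a coarse location hash, log shadow-mode
# deltas, and reconfirm current plans against history. All hypothetical.
from collections import Counter, defaultdict

decision_memory = defaultdict(Counter)   # location -> Counter of actions

def location_key(lat: float, lon: float) -> str:
    # Coarse grid cell (~11 m); a real system would use geohash + heading.
    return f"{round(lat, 4)}:{round(lon, 4)}"

def record(lat, lon, action):
    decision_memory[location_key(lat, lon)][action] += 1

def shadow_delta(lat, lon, fsd_action, driver_action):
    # Shadow mode: FSD plans in the background while the human drives.
    record(lat, lon, driver_action)
    if fsd_action != driver_action:
        return {"loc": location_key(lat, lon),
                "fsd": fsd_action, "driver": driver_action}  # -> mothership
    return None

def reconfirm(lat, lon, proposed_action, min_support=0.8):
    # Only trust the current plan if history mostly agrees with it.
    seen = decision_memory[location_key(lat, lon)]
    total = sum(seen.values())
    return total == 0 or seen[proposed_action] / total >= min_support

# Example: human turned left where shadow FSD wanted to go straight.
delta = shadow_delta(37.7749, -122.4194, "straight", "left_turn")
```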
I can see big issues in biasing a decision-making algorithm too heavily toward average driver behaviour under past road conditions, though, particularly if a lot of its existing issues come from not handling novelty well at all...
In one of the technical videos a Tesla engineer presented, I remember the (paraphrasing) quote that the car has no memory and sees the same intersection for the first time, every time. It sounds intentional, as part of their strategy to not rely upon maps etc.
Humans have an ...ok... driving algorithm for unfamiliar roads. It's improved a lot with maps/directions software, but it still sucks, especially in denser areas.
Routes people drive frequently are much more optimized: knowledge of specific road conditions like potholes, undulations, sight lines, etc.
I would like to have centrally curated AI programs for specific routes rather than a solve-everything, ad hoc program like Tesla is building.
However, I would guess the ad hoc/memoryless model will still work ok on highway miles.
What I really want is extremely safe highway driving, more than an automated trip to Taco Bell.
I personally think Tesla is doing ...ok. The beta 9 is marginally better than the beta 8 from the youtubes I've seen. Neither are ready for primetime, but both are impressive technical demonstrations.
If they did a full from-scratch rewrite about three or four years ago, then this is frankly pretty amazing.
Of course with Tesla you have the fanboys (he is the technogod of the future!) and the rabid haters (someone equated him with Donald Trump, please).
A basic uncertainty lookup map would probably be a good thing. How many Tesla drivers took control in this area/section? What certainty does the software report for this area/section?
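As a sketch, that map could be as simple as per-tile counters; the structure and geohash keys below are assumptions, not anyone's actual implementation:

```python
# Sketch of an uncertainty lookup map: per-tile takeover counts plus
# the model's own reported confidence, aggregated across the fleet.
from collections import defaultdict

class TileStats:
    def __init__(self):
        self.traversals = 0
        self.takeovers = 0
        self.conf_sum = 0.0

    def add(self, took_over: bool, model_confidence: float):
        self.traversals += 1
        self.takeovers += took_over
        self.conf_sum += model_confidence

    @property
    def takeover_rate(self):
        return self.takeovers / self.traversals if self.traversals else None

    @property
    def mean_confidence(self):
        return self.conf_sum / self.traversals if self.traversals else None

uncertainty_map = defaultdict(TileStats)   # keyed by geohash tile
uncertainty_map["9q8yyk"].add(took_over=True, model_confidence=0.41)
```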
It's all a black box: Google's geofencing, Tesla, once-upon-a-time Uber, GM's Super Cruise, etc.
A Twitter account listing failures is meaningless without the broader statistics and success rates. A Twitter account of human failures would be even scarier.
That's kind of the "selling point" of running this experiment on a non-consenting public: that it will learn over time, and something that works will come out of it in the end.
This is almost exactly why I'm looking forward to test driving an ID.4. I just don't care enough about the acceleration for it to be a plus. I hope you like your ID.4; it really seems like a 'regular car' BEV.
This is a 1st-gen chip with H.264 and VP9 support. Apparently, they've already got early versions of the next generation with AV1, which will hopefully mean wider AV1 adoption in the next few years.
AV1 is already here. Lots of YouTube videos have AV1 encodes, and they'll play back in software on your desktop as long as your browser supports AV1. You can see which codec you're using by right-clicking on the video and selecting the "Stats for nerds" menu item.
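If you'd rather check a downloaded file than the browser UI, ffprobe (part of FFmpeg) can report the codec; a small wrapper, assuming ffprobe is on your PATH:

```python
# Check which video codec a local file uses (requires ffprobe/FFmpeg).
import subprocess

def video_codec(path: str) -> str:
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=codec_name",
         "-of", "default=noprint_wrappers=1:nokey=1", path],
        capture_output=True, text=True, check=True)
    return out.stdout.strip()   # e.g. "av1", "vp9", or "h264"

print(video_codec("sample.webm"))
```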
FYI, looks like the link on the FAQs for Windows is messed up in the second question.
Is CoScreen available on macOS, Windows, Linux, Mobile, or Web?
macOS: yes (download - requires macOS Mojave 10.14.6 and above)
Windows: yes(download - requires macOS Mojave 10.14.6 and above)
Linux, Mobile, Web: coming soon, sign up for the wait list
Fixed... This is what it should have said:
- macOS: yes (requires macOS Mojave 10.14.6 and above)
- Windows: yes (requires Windows 10 and above)
- Linux, Mobile, Web: coming soon, sign up for the wait list
I interviewed a few months ago for a HW position and they mentioned remote was fine, even post-COVID. I'd imagine SW would be more flexible, but I don't specifically know.