I own a 2013 XC60 and as a programmer, I'm amazed at how well their current automation works in practice. The car reads road signs to show me the current speed limit in the dash (actual OCR, not a GPS database). It works amazingly well, except when it's very dark in the winter. Adaptive cruise control flawlessly follows cars based on radar, even in somewhat dramatic full stops, e.g. when there's suddenly heavy traffic ahead. I always have my foot on the brake pedal just in case, but I've yet to have to intervene.
My gut feeling tells me that there should be a mandate that requires all safety software to be open-source and liberally licensed. Safety should be very much part of the commons and we all lose out when this type of software isn't shared freely.
Building high quality safety software and sharing it with all car manufacturers wins way more brownie points in my book than using it as a selling point.
I agree 100% however I think this is probably unlikely to happen. Being safer than another brand makes for great marketing copy and Volvo has been able to rely on that for a very long time. Openly sharing their edge with their competitors doesn't sound like "smart" business. Having said that, they did share their seatbelt design once so maybe....
Someone in this thread mentioned that Volvo let anyone use their patented seat belt design. So there's at least a tiny speck of light at the end of the tunnel :)
Why would it bother OCR'ing road signs? That seems much more error prone and needlessly complex. That data is readily available from data sources based on your GPS location; I know TomTom and other navigation devices have it.
Maybe because their goal is to steadily build a car system that can read the environment and adapt speed and direction based on the actual situation? The map is not the territory.
The latest models also detect pedestrians, bikes, and even animals -- and the car sets off an alarm and brakes so that we don't run over them. High beams are politely turned off when meeting another car, as well as when passing near pedestrians. The software also performs lane detection, shows you where you're heading when moving in reverse, and signals to the driver when another vehicle is close by to the left or right, even if you can't see them in the rearview mirrors.
Volvo has a goal of preventing or mitigating accidents to such an extent that by 2020 nobody will die in a Volvo when there is a car accident. Their commitment to the safety of car passengers was such that they let everyone freely use their patented seat-belt system when they improved it. They've been running a team in Sweden for decades that travels to every single car accident involving a Volvo, so that they can understand the causes and consequences of the accident, sometimes even repeat the event in lab conditions -- and they can improve the cars accordingly.
Yes, we own a (new) Volvo -- we picked it up at the factory in Sweden, drove around Europe for a few weeks, and returned it for them to ship it over to our US address.
I wonder how lane detection works. I've had it warn me in a situation where there were no markings on the asphalt due to recent construction work, so it seems to be more clever than just detecting when you drive over markings.
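For the simple case where markings are visible, the classic camera-only approach is roughly edge detection plus a straight-line fit. Here's a toy sketch in Python/OpenCV -- not Volvo's actual system, and the input frame and every threshold are made up for illustration:

    import cv2
    import numpy as np

    # Toy sketch of marking-based lane detection (not Volvo's implementation).
    frame = cv2.imread("dashcam_frame.jpg")            # hypothetical camera frame
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)

    # Keep only the lower half of the image, where lane markings normally appear.
    h, w = edges.shape
    mask = np.zeros_like(edges)
    mask[h // 2:, :] = 255
    roi = cv2.bitwise_and(edges, mask)

    # Fit straight line segments to the remaining edges.
    lines = cv2.HoughLinesP(roi, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)
    if lines is not None:
        for x1, y1, x2, y2 in lines.reshape(-1, 4):
            cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)

The no-markings case you describe presumably means the production system also leans on other cues (road edges, curbs, the car ahead) rather than painted lines alone.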
I wonder what happens to speed limits when self-driving cars are ubiquitous... I mean, once there's no more revenue from violations, and cars know ideal speeds far better than legislatures, should we just let the computers figure out what speed they want to go?
Currently, 'reducing accidents' is the primary driver for speed regulations (at least in Europe).
There are a lot of other variables to optimize for, both on an individual basis (fuel, travel time, battery life, ...) and on a collective basis (congestion locality, total time lost in congestion, emissions, ...).
I guess legislatures - over time - will try to optimize some variables for 'the greater good' instead of minimizing your travel time :-)
For certain safety classes, to reach the highest qualification, the vehicle-under-test must pass some pedestrian accident scenarios, so it's not that the manufacturers do this out of good will. The S-Class used to flatten pedestrians quite liberally, which caused it to fail certain tests, which in turn caused them to implement this: http://www.daimler.com/dccom/0-5-1210222-1-1210363-1-0-0-121...
At least in Finland, and probably in Sweden too, speed limits change often (every six months) due to winter. In my experience the GPS databases are very often out of sync, and you can't predict which speed limits change and when. In addition, there are electronic speed limits that change depending on the weather in real time. OCR seems like a better option for these conditions.
And they can change faster than that. Yesterday they put up a 50 km/h sign for half a kilometer on my drive to work due to some roadwork; today it was gone again.
I know in Seattle there are variable speed limit zones, where the signs are digital displays that can be changed at any time to reflect appropriate speed limits for the current conditions.
I think OCR may have reliability issues, but most road signage has consistent properties (size, height, font, etc.) that could be easily optimized for.
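To make that concrete, here's a toy sketch of how consistent sign geometry helps: find circular sign candidates first, then hand the crop to a digit-restricted recognizer. This is only an illustration in Python/OpenCV with pytesseract assumed as the OCR backend -- it is not how Volvo's system works, and all parameters are guesses:

    import cv2
    import pytesseract  # assumed OCR backend for this sketch; the real recognizer is unknown

    frame = cv2.imread("road_scene.jpg")                # hypothetical camera frame
    gray = cv2.medianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 5)

    # European speed-limit signs are circular with a known size range,
    # so a Hough circle transform narrows the search cheaply.
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                               param1=100, param2=40, minRadius=15, maxRadius=80)

    if circles is not None:
        for x, y, r in circles[0].astype(int):
            crop = gray[max(y - r, 0):y + r, max(x - r, 0):x + r]
            # The sign font is standardized, so restrict the recognizer to digits.
            text = pytesseract.image_to_string(
                crop, config="--psm 7 -c tessedit_char_whitelist=0123456789")
            print("candidate speed limit:", text.strip())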
It also covers the (admittedly limited) situations where no GPS is available (tunnels, skyscrapers). If there are temporary changes in speed limits due to construction or other events, OCR could immediately identify those with no need to download new information.
Speed limits might change without prior notice due to a car accident or road works ahead. Such work might be initiated by authorities responding to an accident without advance planning or any way to update a central database.
Norway :) The road sign reading works well anywhere. Adaptive cruise control works best on highways - it's cruise control, after all. When doing a 90 degree turn, for example, it doesn't "know" what's happening. It works well for slow traffic though, and can do full stops and automatically start driving again when traffic is really slow.
I wonder which country will be the first to allow completely automated cars with no human driver. As a blind person, I wonder if that would be enough to make me consider migrating from the U.S., depending on the country.
You should seriously consider approaching car companies. There will be some (understandable) public resistance to automatic cars, but you could make an excellent poster boy for their benefits.
That will be interesting, since Singapore hates car use. That shows in their incredibly high tax on cars; a typical car costs more than $100k. Since automated cars can reduce city cruising for parking space and increase car utilization, they may actually welcome automated cars and perhaps ban manually driven cars altogether.
>* Licensing for a car is time-limited. I think this is about 10 years or so.
In the US it is 4 years. I was surprised by such a short time frame, as in India it is at least 20 years. And in the US you are also required to renew your registration plates every year.
What I mean is "you're not allowed to register a car again after that point". Most people sell it across the Strait in Malaysia. This distinguishes them from the US and India, where you can drive a car until it doesn't work anymore.
No, Singapore does allow you to register the car again -- problem is, this requires paying for a new "Certificate of Entitlement", which are sold at auction for >US$70k each. So most people switch to a new car instead.
Has anything been said about an open protocol for inter-car communication? What about sharing point cloud data?
I have a bad feeling that Google is going to keep everything sealed in Google Maps. The industry would benefit from sharing all of their data with OpenStreetMap.
And yes, safety applications like emergency braking, intersection collision warning, or curve speed warning have high priority, but other applications like mapping are also possible. I think current mapping applications use roadside infrastructure (also through DSRC), but I guess you could share traffic information (Waze?) using inter-vehicle communication.
> The industry would benefit from sharing all of their data with OpenStreetMap.
What is the barrier to entry for an open platform for people to submit this same data to OpenStreetMap? Precise positioning + GoPros + data processing on the backend.
Americans are always awed/terrified by the Magic Roundabout, but it's actually very practical (turning right is much easier than on a single big roundabout, and the whole thing moves better), and straightforward to use even for newcomers (well, assuming you've used roundabouts at all - I understand America doesn't have many of them?)
Watching the video it struck me that the lights embedded in the barriers between lanes are a really good idea. It provides illumination of the road and the barrier without large poles lining the road which block the view and are expensive.
In Sweden the national traffic authority is responsible for steadily improving the road conditions and environment in order to make road traffic safer (for everyone). What you noticed is one example of their many improvements.
Google currently has a req out for an electromechanical engineer to work on a new sensor system. I'm betting they're building a new sensor for their vehicle - rotating, but perhaps not LIDAR.
Cracking the heavy rain/snow problem would require more than just a new design though - you'd need a totally different sensor modality, a mix of sensors that can extract enough data, or a system that's far more robust to noise.
I realize the costs would be high, but would there be a way to embed some information in/near the roadway that would assist the onboard systems? I think there might be some advantages in making the road smarter in addition to the car.
There are advantages, but this poses a high bar for adoption. The current approach of keeping the entire suite of sensors on the car means that infrastructure won't need immediate upgrades to support autonomous vehicles.
I have no doubt that smarter roads are in our future, but they pose too high a price for widespread adoption of autonomous vehicles at the outset.
I think this is ultimately the best way forward, but I am excited to see other tech alternatives develop, too.
I really don't think it would be that expensive, at least along Interstates and major highways, to embed a trace wire to help guide self-driving cars. (At least during regular maintenance and new construction; tearing up roads to add it would obviously be expensive.)
Even if you use a sensor fabric on the roadway, as well as high-precision positioning GPS (sub-centimeter resolution) for lane keeping, you will still need a sensor that can build a model of the environment around the car that software can process. Some sort of lidar/radar combination, I believe.
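As a rough illustration of what "a model software can process" can mean, here's a toy sketch that turns a planar LIDAR scan into an occupancy grid. The scan geometry, resolution, and random stand-in ranges are made up; real systems fuse many more sensors and work in 3D:

    import numpy as np

    # Toy sketch: planar LIDAR scan -> occupancy grid around the car.
    angles = np.linspace(-np.pi / 2, np.pi / 2, 361)           # 0.5 degree steps, 180 degree FOV
    ranges = np.random.uniform(0.5, 30.0, size=angles.shape)   # stand-in for real returns (metres)

    # Polar -> Cartesian in the car's frame (x forward, y left).
    xs = ranges * np.cos(angles)
    ys = ranges * np.sin(angles)

    # Rasterise into a 60 m x 60 m grid at 0.5 m resolution, car at the centre.
    res, half = 0.5, 30.0
    grid = np.zeros((int(2 * half / res), int(2 * half / res)), dtype=np.uint8)
    ix = ((xs + half) / res).astype(int)
    iy = ((ys + half) / res).astype(int)
    ok = (ix >= 0) & (ix < grid.shape[0]) & (iy >= 0) & (iy < grid.shape[1])
    grid[ix[ok], iy[ok]] = 1    # mark cells containing a return as occupied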
The original DARPA Grand Challenge cars used a range of sensors, including ultrasound, stereo camera setups, and mm-wavelength radar for the teams that had the money for it. LIDAR is just the easiest to process (to get a 3D scene representation of the surroundings) and gets the highest bang/buck ratio in its range of applicability. You need to complement it with other sensors to cover a wider range of conditions.
There is nothing else that gets the accuracy Google needs. Sensors are currently an unsolved problem. Multi-wavelength radars don't yet have the accuracy, and they don't work well with nonmetallic objects. (There is a limit to how much power a radar can use (safely) and how expensive it can be. What works for an F-22 doesn't work for Google.)
Google is not planning to monetize this technology anytime soon, despite the hype.
The difference between crude human/animal intelligence and top-notch AI research is still huge. If people needed the accuracy that Google's car needs to move reliably and make split-second decisions, we could never leave the house. We operate using just two cameras and accelerometers. The clear picture and spatial recognition are done using top-notch heuristics in the unconscious. With a self-driving car it's the opposite: it needs millions of very accurate distance measurements per second to drive. Driving the way Google's car does with cameras only is not happening yet.
I am with you, except for your last sentence, which is incorrect.
Mercedes-Benz in Germany has been doing active research into dynamic computer vision for driverless cars since the 1980s.
"1758 km trip in the fall of 1995 from Munich in Bavaria to Odense in Denmark to a project meeting and back. Both longitudinal and lateral guidance were performed autonomously by vision. On highways, the robot achieved speeds exceeding 175 km/h" ... "This is particularly impressive considering that the system used black-and-white video-cameras"
"In August 2013, Daimler R&D with Karlsruhe Institute of Technology/FZI, made a Mercedes-Benz S-class vehicle with close-to-production stereo cameras and radars drive completely autonomously for about 100 km from Mannheim to Pforzheim, Germany, following the historic Bertha Benz Memorial Route."
Right, my camera takes more than a million measurements in 1/100th of a second and has a spatial resolution comparable to that or better depending on the distance.
'A million measurements' sounds really impressive but it does not have much to do with anything. What's a measurement? A single distance measurement in front of the car? Ok, at what opening angle, how many returns, how many pulses / second and so on.
As it stands that's just a 'big number' but those are not impressive at all without context.
>Right, my camera takes more than a million measurements in 1/100th of a second and has a spatial resolution comparable to that or better depending on the distance.
Now you put a second camera nearby and run a stereo analysis algorithm to build a 3D scene. 10+ years ago (the DARPA Grand Challenge, where the roots of Google's self-driving car architecture come from), with 1 MP cameras and the hardware available, you'd be lucky to get one scene per second, and a very crude one at that, since 1 MP is much lower resolution than our eyes, and resolution is the key to stereo vision. With LIDAR you just get a 3D point for each measurement, with no processing (besides regular filtering and coordinate transformation).
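For reference, the stereo step looks roughly like this today with off-the-shelf tools (a sketch only; the file names and calibration values are placeholders):

    import cv2
    import numpy as np

    # Toy sketch of the stereo analysis step: disparity from a rectified pair, then depth.
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=9)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0   # fixed-point -> pixels

    # Depth follows from the calibrated geometry: Z = f * B / d.
    focal_px, baseline_m = 700.0, 0.30        # assumed calibration, not real values
    depth = np.where(disparity > 0, focal_px * baseline_m / disparity, 0.0)

Unlike LIDAR, where each return already is a 3D point, here the depth only exists after the matcher has found correspondences, which is where the compute goes.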
I wonder (I haven't touched it myself for years, nor checked the literature) what stereo processing one gets today with 10-20 MP cameras on today's Intel CPUs plus a GPU. It should be pretty close to what our eyes do, and, most importantly, using several 20 MP cameras you can probably do _better_ than our eyes.
You cannot do better than our eyes. The dynamic range of eyes plus the bit depth are unparalleled in any camera. We're also backed by a very strong pattern matching algorithm.
That said, stereo runs pretty damn fast these days. On ASICs. TYZX, who was bought by Intel, sold a stereo camera about 3 years ago that ran ~52 fps with full point cloud returns. I think those were running 2+ Mpx.
>You cannot do better than our eyes. The dynamic range of eyes plus the bit depth are unparalleled in any camera.
This is one of the reasons I mentioned several cameras: each camera, or pair of them, can cover a different [overlapping] subrange of light sensitivity, and each can do better than the eye in its respective subrange, so the integrated image may be better than the eye's.
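One cheap way to do that integration is plain exposure fusion. A sketch with OpenCV's Mertens fusion -- the input frames are hypothetical, and a real multi-camera rig would also need the cameras registered to each other:

    import cv2

    # Toy sketch: merge frames captured at different exposures (here, notionally
    # from different cameras) into one image with a wider usable dynamic range.
    exposures = [cv2.imread(f) for f in ("cam_dark.png", "cam_mid.png", "cam_bright.png")]
    fused = cv2.createMergeMertens().process(exposures)        # float image in [0, 1]
    cv2.imwrite("fused.png", (fused * 255).clip(0, 255).astype("uint8"))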
A car that only reliably self-drives in good weather conditions is still (a) quite a hard problem, (b) almost certainly a commercially viable product on its own (c) a pretty good start on a car that reliably self-drives in any weather.
As someone who is blind, I'd buy a car if it could drive me where I need to go 75% of the time. This assumes that weather forecasts would be accurate enough to let me know the car could get me to the grocery store and dry cleaner any time during the current day, rather than leaving me stranded at my destination, unable to get back home.
(1) Delivery-bot. A car that drives itself to drop off a package and only delivers on non-rainy days. (if you need delivery on a rainy day, you pay extra for a person-driven delivery service. Otherwise the package waits at the warehouse)
(2) Transport option for people who can't or shouldn't drive themselves - too old, too blind, too young, or physically impaired. The self-driving car takes them where they need to go when weather permits, otherwise they have to call a cab or van service as a backup option.
(Much of California only has a couple weeks of rain per year.)
For Google, sensor technology is pretty much an implementation detail at this point. I imagine their interest is much more in vetting out the basic concept and understanding where the gaps are than it is on pushing toward getting a near- to mid-term product on the roads.
On the other hand, I imagine that auto manufacturers are much more interested in getting to viable product--whether it's improved assistive driving features (collision avoidance, speed matching, etc.) or, in the somewhat longer term, autonomy for some limited range of conditions. Hence, for example, Volvo's involvement of government as well.
I can't wait for self driving tech to be put into use - transportation will get cheaper and faster (no more 8-10 hours driving limit bs), not to mention that one or two persons could drive a dozen trucks filled with cargo (though a lot of truckers will hate that)...
> transportation will get cheaper and faster (no more 8-10 hours driving limit bs)
Why do you say that? Somebody is still going to need to be awake at the controls, at least for the first few decades of self driving tech. There's a reason pilots on planes don't go to sleep when the auto-pilot kicks on.
It's not a good comparison: flying and driving have totally different response times (minutes vs. seconds). One could even say people can't respond well within seconds unless they keep full attention at all times, which defeats the purpose of self-driving cars.
We've basically proven that, short of a software glitch, the car will always react faster and more appropriately than a human, since a computer can process more information more quickly than a human.
The driver won't have to be vigilant or even awake for the whole time, only in emergencies. I'm assuming the car would stop or slow down automatically in case of any problem and notify the driver.
I don't think that will be a hard requirement. A self-driving system will have to be capable of bringing a truck, or a convoy of trucks, to a safe stop in case of an incapacitated "driver" ("operator?"). Bringing an airplane promptly to a safe stop is usually not possible.
> not to mention that one or two persons could drive a dozen trucks filled with cargo
Maybe autonomous vehicles will make smaller trucks cost-effective, and big trucks will then be charged their true cost in road damage and traffic risks.
That would be a much more visible change in the traffic landscape.
This won't work in our country. Drivers don't follow traffic regulations, and the roads can't even be called roads. Everything is just wrong. Even a very advanced AI couldn't handle our roads in the Philippines.
Heh, that's true for most countries. But I'm pretty sure the tech can already handle motorways - the cars could self-drive there, then let the driver take over in cities...
“The test cars are now able to handle lane following, speed adaption, and merging traffic all by themselves,”
This is an important step, although I must say they appear to be far behind Google. My money is on Google getting to an acceptable deployment phase far earlier.
Given the driving conditions in Sweden and the years of experience Volvo has as a car company, I would bet they will be able to deploy an all-weather version before Google.
This effort does not appear to be as ambitious as the Google approach. The cars drive a specific test route around "50 kilometers of selected roads in and around Gothenburg." That's quite a different thing than a system where you key in an arbitrary destination and the car drives you there.
I think they are probably trying to solve the same problem from two different directions.
Google appears to be making an advanced and expensive solution to "full" autonomous driving. The limitation will not be in where it can go, but it is initially limited in weather/environmental conditions. The target audience is likely not production cars in the short term, but a system like theirs can for example make their street view photography cars autonomous: they can work in summer, stop if it rains, and it doesn't matter too much if they cost twice as much as a regular car. It will of course be a regulation nightmare to allow the driverless cars out in the street though.
Volvo on the other hand will probably try to make a simpler and much less expensive solution that will have to work in rain/darkness/snow, and not add a significant amount to the cost of the car. Clearly that means having other restrictions, presumably in the overall intelligence or in which roads it works on. Volvo's system is likely intended for making day-to-day travel more secure at the lowest cost possible, not for creating completely autonomous navigation and problem solving (e.g. 4 way stop turn-taking). It must work in almost any environmental conditions, but it will probably require a driver at all times, and will almost certainly not work everywhere.
Never thought of this before but you might be spot on. Maybe Google has no intention of making autonomous cars for normal use but exclusively for Street View.
Street View is hopelessly out of date in many places (4 years in some parts of Sydney for example) and it has to be super expensive to keep thousands of people continuously driving around the world. And Street View will likely be a vital and profitable data source if VR/AR (in all its forms) is ever used en masse.
I wonder if they could cut costs simply by making an agreement with a taxi company to put the camera across a few cars. Then they would need fewer drivers to go out and cover the gaps taxis don't cover over the course of a year or so. It would work well for urban cities, one would think. Uber should have data to give a rough idea of coverage.
>Maybe Google has no intention of making autonomous cars for normal use but exclusively for Street View.
Sounds like too much expense for just that. If the problem with Street View were its cost, you could just pay people to add a Street View system to the roofs of their cars and get the data much more cheaply than having your own cars. They may very well want to start by using Street View to validate the technology, but the end game must be larger.
If the problem with Street View were its cost, you could just pay people to add a Street View system to the roofs of their cars and get the data much more cheaply than having your own cars.
I think you're underestimating the number of people you'd have to pay to get full coverage of cities. Most people do not drive around on every street in a systematic way; they have a small number of destinations and they usually take the same routes between them every time.
You'd have to optimize for that. You can add incentives for people to mix up their routes, and you can use a much smaller number of dedicated cars to cover whatever is left. My point was that if the problem you're trying to solve is "StreetView is too expensive to operate" you're not going to go and create self driving cars, you'd do something simpler.
That doesn't mean that once you do have self-driving cars you wouldn't use them for that. Or even that StreetView wouldn't be the first thing you'd use them for, since that's a good way to get more training data for your algorithms. But to suggest they'd go to all the expense of creating self-driving cars to then use them just as a cost saving measure for StreetView sounds strange.
Not to kick the guy that's down, but "I wish we had more recent Street View, OH I KNOW let's build a totally autonomous car" sounds to me somewhat like "people want their postal mail online. OH I KNOW let's send somebody to every house and pick up the mail the mailman left."
Not unlike the experience for drivers on Lyft and UberX, it would be trivial for Google to create a rig and an app. You set the magnetic camera rig on your roof and sign in. It shows you available routes and how much they'll pay for them. The offer increases every so often if nobody selects the route. Done.
Well, even if improving Street View is a shorter term goal, it seems pretty reasonable to assume that Google has at least some intention of using that same core technology to pioneer, or at the very least compete in, the very lucrative market of autonomous consumer vehicles down the road.
> Vissa korsningar har en stoppskylt vid varje väg, ett så kallat flervägsstopp. Det innebär att alla som kommer till korsningen måste stanna, oavsett vilken väg man kommer på. Tanken är att man vid flervägsstopp ska ta ögonkontakt med medtrafikanter för att på så sätt komma överens om vem som ska köra först.
Or my translation: "Certain intersections have a stop sign on every road, a so-called multi-way stop. This means that everyone who comes to the intersection must stop, regardless of which road they arrive on. The idea is that at a multi-way stop you make eye contact with your fellow drivers in order to agree on who drives first."
(Apparently the actual law says "ömsesidig hänsyn" - "mutual consideration.")
That said, they aren't anywhere near as common as in the US.
Do you actually have them in Sweden? I was taught similar rules back when getting a driving licence, but it doesn't necessarily mean that such intersections exist - I was told about intersections that were like that in the 1960s, but those spots have traffic lights now.
I biked through one yesterday, in Trollhättan. If you do an image search for "flervägsstopp" you'll find several images, including this one which is specifically a four-way stop: http://www.trafikmagasinet.nu/art090503.htm .
Hej! Another Trollhättebo! Yeah, I actually passed a couple on my ride, though I only mentioned one. As an American my baseline is the number of 4-way stops in the US. I've only lived in Gothenburg and here, which makes for a pretty biased sample.
Oh? At least in Germany it's that you have to yield to a driver to the right (ignoring priority roads) but if you have an intersection of non-priority roads and a car from each direction someone has to make a decision to go first, otherwise it's a deadlock.
Europe doesn't generally use 4-way stop signs (http://en.wikipedia.org/wiki/All-way_stop) in the traffic regulations; intersections tend to be either regulated, or with one road designated as the priority, or as roundabouts, not like this.
I've driven a bit around Europe, but I have never seen a single such intersection in my life - if they exist, they must be rare.
Of course, "if you have an intersection of non-priority roads and a car from each direction" then it's the same, but in practice it seems that 'they' make sure that such intersections are only in extremely low-traffic places where you'll very rarely see another car at the same time.
In Sweden as soon as you're in a residential area or out in the countryside, unmarked priority-to-the-right crossings are the norm. And if cars from all directions in a crossing arrive at once, you end up with an ambiguous 4-way stop situation.
In creating an autonomous car, successfully handling those rare edge cases is going to be the thing that differentiates success from failure. Roundabouts and Michigan left turns are rare in most places in the US, but an autonomous car would still have to be able to handle them.
In Europe you can have ambiguous situations with priority to the right, where you are supposed to be courteous and wave the other driver through if both (or more) of you have the right of way.
But those 50 km are, according to the article, 'main commuter arteries', which to me suggests they selected a small set of roads not to make the driving circumstances easier, but to be able to more easily monitor the cars that drive there and to reproduce similar situations, to test improvements to certain algorithms or hardware.
So basically they're testing under the 'worst' circumstances, making it likely that it will perform well on 'easy' routes as well. This approach makes sense - what good is it to clock 1000's of kms of test drives on straight roads with no traffic and in broad daylight?
It seems unlikely to me that the supported routes have train tracks, construction cones, bike lanes, etc. -- the kinds of difficult problems of urban driving that Google has put a lot of resources into solving.
Driving in Göteborg is by no means a piece of cake. Unlike most American cities, there are a lot of pedestrians and bicycles, plus trams, and by necessity they sometimes take priority over car traffic.
However, one major traffic simplification in Europe is that there is no such insanity as "go ahead and turn right even when the light is red -- and run over crossing pedestrians unless they back off."
It might be true that the supported routes are favourable; Gothenburg, however, does operate streetcars/trams together with regular buses as public transportation. Those tracks will be hard to avoid in the city.
This quote from a long New York Times article on Google is highly relevant:
"The self-driving algorithms do not work because there has been some breakthrough in artificial intelligence; they run on maps. Every road that Google’s robo-cars drive on was first surveyed by a human-driven pilot car outfitted with sensors accurate enough to measure the thickness of the painted lines in the middle of the road. Every detail of the road has been mapped beforehand. According to Peter Norvig, Google’s head of research, it’s a hard problem for computer vision and artificial intelligence to pick a traffic light out of a scene and determine if it is red, yellow or green. But it is trivially easy to recognize the color of a traffic light that you already know is there."
So Google self driving cars run in a virtual environment prebuilt for them by their street view cars being manually driven, with sensor input to fill in the realtime data variables (people, other cars, traffic light state). Brilliant!
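The traffic-light example from the quote really is that simple once the map hands you the location. A toy sketch -- the ROI coordinates and HSV thresholds are invented for illustration, not taken from Google's system:

    import cv2
    import numpy as np

    # Toy sketch of "recognise the colour of a traffic light you already know is there":
    # the prior map says which pixels to look at, so classification is just counting hue matches.
    frame = cv2.imread("camera_frame.jpg")
    x, y, w, h = 612, 80, 24, 60                     # hypothetical ROI from the prior map
    roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)

    ranges = {
        "red":    ((0, 120, 120), (10, 255, 255)),
        "yellow": ((20, 120, 120), (35, 255, 255)),
        "green":  ((45, 120, 120), (90, 255, 255)),
    }
    counts = {name: int(cv2.inRange(roi, np.array(lo), np.array(hi)).sum())
              for name, (lo, hi) in ranges.items()}
    print("light state:", max(counts, key=counts.get))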
I don't see "Google needs to pre-map the route" as a hurdle at all. Even if Google had never been in my area, I would not mind one bit having to do my daily commute manually for a week while the car builds more and more detailed maps of the route, if it saved me the hassle of the commute for months thereafter.
Plus, every time the automatic car goes over the road, it's presumably checking its prior maps and flagging any anomalies from its preset mission. It still needs to recognize a new stop sign, but if a stop sign disappears, it will stop anyway while waiting for an answer on "was it taken out on purpose, or did some kids steal it?".
This is the basis of the SLAM algorithms (SLAM = simultaneous localization and mapping). As you say, if we all had these cars, we'd quickly know about changes. Of course, the hard problem is finding the new lights, but if this was to become mainstream I think a lot of things would change. For example, if a town decides to install a red light, they don't just install one, they register it in a national database.
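A toy version of that "check the prior map and flag anomalies" step might look like this -- made-up coordinates and plain nearest-neighbour matching; real systems also reason about occlusion and detection confidence:

    import numpy as np

    # Toy sketch: compare landmarks seen on this pass against the surveyed prior map.
    prior_map = {"stop_sign_17": (102.4, -3.1), "light_03": (250.0, 1.8)}   # hypothetical map
    observed = [(102.6, -3.0), (311.5, 0.2)]        # detections from the current drive
    tolerance_m = 1.0

    matched = set()
    for ox, oy in observed:
        hits = [name for name, (px, py) in prior_map.items()
                if np.hypot(px - ox, py - oy) < tolerance_m]
        if hits:
            matched.update(hits)
        else:
            print(f"new landmark at ({ox}, {oy}) -- not in prior map, flag for review")

    for name in set(prior_map) - matched:
        print(f"{name} expected but not seen -- flag for review (removed, stolen, occluded?)")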
If the car could map the route automatically, then it wouldn't need the pre-map in the first place. The problem is that creating the map routes currently requires expert human attention. Driving your daily commute manually for a week won't do any good.
Yes, it is clear that the Google system needs quite a lot of data about the roads it is driving on. But given those roads with a high amount of detail, the Google system can (from my understanding) support keying in an arbitrary destination.
I may be reading too much into this, but when I read "test route" in the article, it gave me the impression that the Volvo approach is sort of like a train without the tracks -- only able to follow a pre-programmed route from point A to point B.
No, they picked a test route to be decently representative and allow them to setup their tests correctly. It is not an A to B thing. role_v's comment is a pretty good explanation https://news.ycombinator.com/item?id=7698495
"Our aim is for the car to be able to handle all possible traffic scenarios by itself, including leaving the traffic flow and finding a safe 'harbour' if the driver for any reason is unable to regain control," explains Erik Coelingh, Technical Specialist at Volvo Car Group.
In addition to what others have said, it struck me as very important that Volvo is having customers, not employees, drive these cars. I imagine they react very differently, and will be better at simulating what the real world is like.
lane following, speed adaption, and merging traffic all by themselves
Well, that's better than most human Volvo drivers can do, by my experience. Whenever I see a car make a bone-headed move in traffic, chances are it's a Volvo.
Very interested in this space and how it will end up. We have a big emotional attachment to cars evidenced by the fact that the annual cost of taking taxis everywhere is probably cheaper than the annual costs of owning a car if you live in a city. NYC residents have probably made this transition already.
So if we end up with self-driving taxis, then this form of transport will become ridiculously cheap, as the driver's salary is now eliminated. The rationale to own a car will become harder to justify.
Also, I read recently that our actual usage of cars is very low over a year. This means huge inefficiency. There is plenty of spare capacity per car to do a lot more journeys. If we all start taking self-driving taxis that run 24x7, in a few years' time where does that leave the car companies? One assumes with far fewer car sales. Are they shooting themselves in the foot?
> We have a big emotional attachment to cars evidenced by the fact that the annual cost of taking taxis everywhere is probably cheaper than the annual costs of owning a car if you live in a city
I don't think that's true of most cities. Definitely not true of London.
I know you meant this as a joke.. however, Volvo cars do watch for pedestrians and automatically stop for them if needs be. In the worst case, there is an external airbag that will deploy in front of the windshield to avoid the worst types of injuries based on decades of analysis of car accidents with pedestrians (in Sweden).
"tested live on public roads in Sweden" All I saw was a guy driving that car. Didn't actually get to see the car driving on its own.~ Can't believe it till I see it.
I was quite a fan of the manual transmission, but I got an automatic car for the first time now (Mercedes A-class), and it's just so good at shifting that I never even consider it any more. I think of the transmission as another part of the driving experience to automate away now, like turning lights and parking brakes on and off.
It still has paddles, so you can manually shift. Sure, if you don't like it, it takes away the element of fun, but a few minutes of fun isn't worth the hours of manual-shifting drudgery during rush-hour traffic for me.
But I like manual transmissions as they force me to pay attention to the act of driving quite frequently. Without that constant reminder, I have a tendency to zone out and go on autopilot myself, which is of course dangerous.
Manual transmissions are my hack to attack this issue - but I’d love to have a self driving car where I was reasonably confident that my zoning out would not be a safety hazard. (As a manual transmission isn’t a great hack, and I live in an urban area, I mainly take the bus, which solves the "zoned out problem" completely.)
I suspect that "zoning out" will be a big question as these technologies are brought to market albeit in a different way than you describe. Many of the things being worked on will likely be useful and potentially available as assistive systems well before they could be delivered as part of a complete autonomous system. However, there will almost certainly come a point when you can provide too much of certain kinds of assistance unless you're prepared to actually turn over full control to the system.
You can't have a system that doesn't need a person to do anything 99% of the time but requires that same person to take over control at a moment's notice to deal with some situation that the computer doesn't know how to handle or, indeed, didn't even recognize.
With draconian CAFE requirements coming (49 mpg average [1][2] for vehicles the size of a Honda Fit), manufacturers are really ramping up their efforts to be compliant. Automatic transmissions can now deliver 1-2 mpg better than a manual, plus sometimes faster shift times. You're also seeing a lot more CVT transmissions vs. conventional planetary-gear ones these days because they typically deliver 2+ mpg better economy.
[2] I'm not saying the days of the V8 are over (like in Mad Max 2 -- "The last of the V8 Interceptors. A piece of history.") .. but if you want one, you might want to start thinking about buying one to keep.
That's the first time I've heard of automatic transmissions giving better mileage than manuals. I always thought the drivetrain losses on an automatic transmission (mainly the torque converter) were the issue? Has that changed?
Depends on the design, YMMV, etc. etc. But Consumer Reports found that the Ford Fiesta and Mazda 3 returned 1 mpg better when specced with an automatic.
How is it possibly better than automatic? I've driven a manual Scania, it's a serious pain in the ass - automatic transmissions are a godsend for trucks...
It really depends on the person and the vehicle. I learned to drive stick some years after I started driving, and I just adore the experience to the point where I will not buy an automatic car. Maybe in the future. But not now.
That said, trucks and passenger vehicles with rubbery shifter bushings might be a different story. There's a lot of shifter and clutch variation out there.