I saw the Tesla Robotaxi:
- Drive into oncoming traffic, getting honked at in the process.
- Signal a turn at a stop sign, then go straight with the signal still on.
- Park in a fire lane to drop off the passenger.
And that was in a single 22-minute ride. Not great performance at all.
I never did say it wouldn’t ever be possible, just that it will take a long time to reach par with humans. Driving is even culturally specific: the way rules are followed and practiced often differs by region. There’s more to it than just the mechanical act itself.
The ethics of putting automation in control of potentially life-threatening machines is also relevant. With humans we can attribute cause and attempt improvement; with automation it’s different.
I just don’t see a need for this at all. I think investing in public transportation more than reproduces all the benefits of automated cars without nearly as many of the dangers and risks.
This is one of the problems driving automation solves trivially when applied at scale. Machines will follow the same rules regardless of where they are, which is better for everyone.
You’d shit yourself if you knew how many life-threatening machines are already controlled by computers far simpler than anything in a self-driving car. Industrially, we have learned the lesson that computers, even ones running extremely simple logic, completely outclass humans on safety because they do the same thing every time. There are giant chemical manufacturing facilities run by a couple of guys in a control room watching a screen, because 99% of it is already automated. I’m talking thousands of gallons an hour of hazardous, poisonous, flammable materials running through a system on 20-year-old computers. Chemical additions at your local water treatment plant that could kill thousands of people if done wrong, all controlled by machines because we know they’re more reliable than humans.
A machine can’t drink a handle of vodka and get behind the wheel, nor can it drive home sobbing after a rough breakup, unable to process information properly. You can also update all of them at once instead of running PSA campaigns telling people not to do something that got someone killed. Self-driving car makes a mistake? You don’t have to guess what was going through its head; it has a log. Figure out how to fix it? Guess what, they’re all fixed with the same software update. If a human makes that mistake, thousands of people will keep making it until cars or roads are redesigned and those changes have a way to filter through all of society.
This is a valid point, but this doesn’t have to be either/or. Cars have a great utility even in a system with public transit. People and freight have to get from the rail station or port to wherever they need to go somehow, even in a utopia with a perfect public transit system. We can do both, we’re just choosing not to in America, and it’s not like self driving cars are intrinsically opposed to public transit just by existing.
What are you anticipating for the automated driving adoption rate? I’m expecting it to be extremely low, as most people cannot afford new cars. We are probably decades away from having enough automated cars on the road to fundamentally alter traffic in a way that entirely eliminates human driving culture.
In response to the “humans are fallible” bit, I’ll remark again that algorithms are very fallible too, statistically even. And while lots of automated algorithms already control life-and-death machines, try justifying that to someone whose entire family was killed by an AI. How do they even receive compensation for that? Who is at fault? A family died. With human drivers we can ascribe fault very easily. With automated algorithms fault is less easily ascribed, and the public writ large is going to have a much harder time accepting that.
Also, natural gas and other industrial systems have far fewer variables than a busy freeway. There’s a reason driving automation hasn’t been attempted until recently. Hundreds of humans all in control of large vehicles moving in a long line at speed is a very complicated environment with many factors to consider. How accurately will algorithms be able to infer driving intent from the subtle movements of the vehicles in front of and behind them? How accurate is an algorithm’s situational awareness, especially when combined road factors are involved?
It’s just not as simple as it’s being made out to be. This isn’t a chess problem, and it’s not a question of controlling train cars on set tracks with fixed timetables and universal controllers. The way cars exist presently is very, very open ended. I agree that if 80+% of road vehicles were automated, it would have such an impact on road culture as to standardize certain behaviors. But we are very, very far away from that in North America. Most of the people in my area are driving cars from the early 2010s. It’s going to be at least a decade before any sizable number of vehicles are current-year models, and until then algorithms face obstacles that cannot easily be overcome.
It’s like I said earlier: the last 10% of optimization requires an exponentially larger amount of energy and development than the first 90% does. It’s the same problem faced by other forms of automation. And a difference of 10% in terms of performance is… huge when it comes to road vehicles.