Hacker News

Could you elaborate on this? What do you know about AI research and technical limitations of various components that leads you to be skeptical of the realization of autonomous vehicles?

I'm with you on predicting that if Apple is getting into it, it will be non-autonomous, but I'm curious how you reached your other skeptical conclusions.



My skepticism has to do with the following:

* People talk about how automatic cars have been extensively tested. That's not really true. There are only a handful of them on the roads, and they've been tested in limited conditions, with particular limits placed on suburban driving (slow speeds, etc.). I suspect there will be entirely new classes of bugs when there are hundreds of different AIs competing on the road, especially when mixed with normal human traffic.

* LIDAR does not work in poor weather conditions, including ice on the road.

* In terms of AI, getting to the 90% point is relatively easy, but it seems like the last 10% would require something resembling true intelligence. How do you deal with a lack of markers on the road? Dangerous situations in shady areas? Sensor equipment getting damaged and feeding your car wrong information? Animals and children running out last-second into the road? Severe obstacles at a distance? Tiny, cramped residential roads with non-sidewalk pedestrian traffic? Accommodating emergency vehicles and police? We shut off our brains when driving 90% of the time, but that last 10% does require human levels of intelligence. Furthermore, unless every car on the road is automated, the AI in automatic cars will need to deal with the idiosyncrasies of other human drivers. Sudden merges. Speeding. Road rage. Unexpected emergencies. Tailgating. An AI can't exchange cues with other human drivers and pedestrians the way we do; it can only use its model of human behavior to predict likely outcomes. (And this is saying nothing of other AIs on the road, which might behave completely unlike human drivers.)

* People say that automated cars only need to drive better than human drivers, not perfectly. I don't know if that's true. If automated cars drove perfectly 99.9999% of the time but then crashed horribly that remaining 0.0001% — taking some poor pedestrians or bicyclists along with them — I wouldn't get into one. And I don't think they'd be street-legal. Most people don't want to think about human lives in terms of numbers; they'd rather have control over their actions and accept the inevitability of occasional accidents, rather than having a machine that's practically guaranteed to eat up human lives every so often.

* Speaking of which, how does the car decide who gets to live in a life-or-death situation? Are there situations where the car would elect to kill the driver? Which programmer gets to make that decision? I'd like to know this information before getting into my car, please. The idea of offloading split-second moral decisions to an AI seems like it should be severely legislated.

* People talk about mesh networking improving traffic and whatnot. I can't even get my USB devices to work across OSes half the time, and we're talking about sophisticated traffic control across multiple manufacturers? Especially given the quality of software that car companies tend to put out?

* Self-driving technology is extremely expensive, and most cars on the road are pretty cheap. You'd have to have some sort of insane subsidy program to get more than the 1% driving automated cars.

* People want to get in their self-driving cars drunk, but I think there will be severe legal hurdles in the way of that.

I think semi-automated and AI-augmented driving are certainly possible. Highways are easy enough to tackle. I could see some expensive cars getting that capability over the next few decades, as well as maybe cargo trucks. But endpoint to endpoint driving, where you could snooze on your way to work? I don't think so.


> If automated cars drove perfectly 99.9999% of the time but then crashed horribly that remaining 0.0001% — taking some poor pedestrians or bicyclists along with them — I wouldn't get into one

Thousands of human drivers crash horribly every day, and over a million people die every year in car accidents worldwide. If humans "crashed horribly" 0.0001% of the time and a self-driving car did so 0.00005% of the time, that would already be a great improvement.
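Just to make the parent's comparison concrete, here's a back-of-the-envelope check using the hypothetical percentages from the comment (these are illustrative figures, not real crash statistics):

```python
# Hypothetical rates from the comment above, converted from percentages.
human_rate = 0.0001 / 100   # humans "crash horribly" 0.0001% of the time
av_rate = 0.00005 / 100     # a self-driving car does so 0.00005% of the time

# Relative reduction in horrific crashes if we switched.
reduction = 1 - av_rate / human_rate
print(f"Horrific crashes cut by {reduction:.0%}")
```

Same absolute numbers either way are tiny, but halving the rate across a million-plus annual deaths is an enormous number of lives.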

I don't have solid data about this, but I think LOTS of the current accidents are due to drivers being reckless, distracted, high, or emotional. Self-driving cars would introduce some new risks, but they would take away lots of the current ones.

And for many of the situations that you list, there is a simple solution: the car should have two modes, self-driving and human-driving. Self-driving will only work when conditions are normal. Incompatible weather conditions? Broken sensor? The car parks itself, and you need to drive yourself or wait for assistance.
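A minimal sketch of that mode-selection logic, with invented condition flags (`weather_ok`, `sensors_ok`) standing in for whatever a real system would actually check:

```python
# Sketch of the proposed two-mode fallback. The condition checks are
# invented placeholders; a real system would gate on far richer signals.

def choose_mode(weather_ok, sensors_ok, human_can_drive):
    """Pick self-driving when conditions are normal, else degrade safely."""
    if weather_ok and sensors_ok:
        return "self_driving"
    if human_can_drive:
        return "human_driving"     # hand control back to the person
    return "park_and_wait"         # pull over and wait for assistance
```

The key design point is that the degraded states are always safe: the car never keeps driving itself once conditions fall outside what it was validated for.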


> And for many of the situations that you list, there is a simple solution: the car should have two modes, self-driving and human-driving. Self-driving will only work when conditions are normal. Incompatible weather conditions? Broken sensor? The car parks itself, and you need to drive yourself or wait for assistance.

That would be a good solution, but many people who talk about self-driving cars don't want that. They want to sleep in the car. They want to "drive" home drunk. In both cases, a manual override would be disastrous and probably illegal. (We can already get a DUI just for getting into a car while drunk!) They also want to be able to "fetch" their cars without a driver, which wouldn't work.


First of all, "many people" will just go along with what is offered, even if it isn't their ideal solution.

Regarding the sleeping and "fetching" your car, those would still be compatible: simply, if the conditions aren't good, the car would respectively park and wake you up, or inform you that it can't respond to your fetch request at the moment.

Regarding a drunk/high person, it would still be an improvement over the current situation: most of the time you wouldn't be driving yourself.


<em>Animals and children running out last-second into the road?</em>

I feel like AI would outperform the heck out of us in that situation, because solving it correctly involves a) high situational awareness, which we suck at because we get distracted, and b) making split-second calculations, which we suck at because we panic and our reflexes aren't that fast. I mean, this is a simple situation, when it comes down to it:

Something runs out into the road.

- Can you swerve and avoid hitting it without hitting anything else or driving off the road? If yes, do. If no:

- Can you hit the brakes without endangering anyone behind you? (Humans tend to have trouble with this decision, because half the time we forget to watch the rear view mirror and now there's no time to check. An AI would ALWAYS know what was behind it.) If yes, do. If not:

- Are you more worried about endangering the thing before you or the thing behind you? If it's an animal, hit the animal. If the thing in front of you is a squishy human and the thing behind you is a car protected by all the engineering that protects cars in crashes these days, hit the brakes.

That's all there is to it, 99% of the time. Sure, this is a situation where an AI might make the wrong decision in 1% of cases, but it's also a situation where humans are wrong much more often than that.
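The decision tree above can be sketched directly as code. All the predicates here (`can_swerve_safely`, etc.) are assumed outputs of some sensor stack, not real APIs from any actual self-driving system:

```python
# Hypothetical sketch of the swerve/brake decision tree described above.

def react_to_obstacle(obstacle_is_human, rear_is_protected_vehicle,
                      can_swerve_safely, can_brake_safely):
    """Choose a maneuver when something runs out into the road."""
    if can_swerve_safely:          # avoid it without hitting anything else
        return "swerve"
    if can_brake_safely:           # nothing endangered behind us
        return "brake"
    # Forced trade-off: weigh what's ahead against what's behind.
    if not obstacle_is_human:
        return "hit_obstacle"      # an animal loses to risking the people behind
    if rear_is_protected_vehicle:
        return "brake"             # crumple zones behind beat a pedestrian ahead
    return "brake"                 # default to protecting the human in front
```

Whether sensing can actually populate those booleans reliably is the hard part; the branching itself is trivial, which is the point being made above.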

<em>How do you deal with a lack of markers on the road?</em>

This one is actually probably most easily solved by just putting markers on all roads. Sure, that would require driving down all roads in a specially-engineered, expensive vehicle with a dedicated crew, which sounds like unaffordable insanity until you realize Google has already done it once, for the Google Maps mapping project. Just do it again, this time in a car made for spray-painting markers instead of mapping streets.

Apart from that, the overwhelming majority of problems on the road can be solved by yielding even when you're technically in the right and just slowing the heck down, if necessary to walking speed and below. There will still be problems that only a human being can solve, but there's going to be a human being in the car, so worst case, pull over and ask them. If your human is drunk or underage, pull over, wait for the next passing car, watch how their human solves it, and do what they did. In the rare case where there's no other car likely to pass any time soon and the human in the car is incapacitated - possibly some sort of remote solution would work, where a human in some kind of remote driving center can take control of your car for a minute if necessary?

There'll still be high-speed situations where the AI will fuck up and kill someone, but I do think people would come to accept self-driving AI even if it occasionally fucks up and kills people, because it'll be world-changingly convenient. And so far we've accepted a LOT of occasionally-deadly shit if it's sufficiently convenient. (Trains. Planes. Nuclear reactors. Every single one of those things had waves of protests and people going "It will never work, and when it fails, people will die! Not worth it!")

And occasionally they failed, and people died, but we still keep using them.


> I feel like AI would outperform the heck out of us in that situation, because solving it correctly involves a) high situational awareness, which we suck at because we get distracted, and b) making split-second calculations, which we suck at because we panic and our reflexes aren't that fast.

My understanding is that current self-driving technology works great at short range, but is pretty incapable of dealing with obstacles at a distance. If there are things far away that are potentially dangerous — kids, a car behaving erratically, a landslide — I can slow down in case something happens. A self-driving car might have more trouble until the obstacle gets in range, by which point it might be too late.

> Are you more worried about endangering the thing before you or the thing behind you? If it's an animal, hit the animal.

What if it's somebody's pet? Sometimes putting a dent in someone's car is worth saving a beloved animal's life. What if you're about to hit someone and there's a bicyclist behind you? There will be times when the car will have to make decisions regarding which life to save, including the driver's.

> This one is actually probably most easily solved by just putting markers on all roads.

This isn't possible on all roads. Markers can be covered by snow, mud, etc. Yes, you can have RFID markers or whatever, but that would be incredibly expensive. Who's going to pay the bill? The taxpayers? Why?

Also, I was just reminded of parking: how the heck will self-driving cars deal with the current parking rules? It takes all my intelligence just to figure those out sometimes!

> There'll still be high-speed situations where the AI will fuck up and kill someone, but I do think people would come to accept self-driving AI even if it occasionally fucks up and kills people, because it'll be world-changingly convenient.

But self-driving cars won't be world-changingly convenient. I don't drive. I don't have to: trains, buses, and occasionally planes take care of my transportation needs for me. I can already snooze on my way to work, get home when I'm drunk, etc. Who will these cars change the world for? Silicon Valley types who want to commute and go on vacation without bumping into anyone else? Because the poor certainly won't be able to afford them, country folks won't be able to use them, and Europeans will probably have no need for them. (Heck, they're still mostly on manual transmissions. And have you looked at the gas prices over there?!)

When we went from horses to trains (and eventually cars and planes), it made it possible for humans to travel across the entire world in a matter of hours. Self-driving cars won't be that kind of leap.

I think it would be much healthier for our cities to focus on developing public transit infrastructure. The best places I've ever lived have been threaded by a city-wide metro system, with commuter trains connecting to further regions. Pedestrians benefit. Suburbanites benefit. The city feels more compact and walkable. You can even get rid of some of the roads!


> What if it's somebody's pet?

Exactly a situation where you want the black-and-white AI solution. If it's a 1% chance the human dies and 100% chance the pet dies, the pet has to go every time.

> This isn't possible on all roads.

It doesn't have to be all roads to be transformational. Tag major city roads + self-driving cars + uber = Johnny Cab. Tag major transit routes + self-driving trucks = no sleepy truckers, no paying sleeping truckers, no rest breaks, on-demand cross-country trucking in 48 hours or less.

This, of course, ignores human desires for control and "freedom of the road" (at least in the US). Who knows how the technology would actually catch on, especially when Johnny Cab DOES run over Fluffy.



