If a pedestrian gets flung in front of your car and then you no longer see them, wouldn't you be inclined to check whether they might have gone under your car? Especially if you're hearing screaming...
Driver behavior after collisions isn't remotely that rational. People do all kinds of crazy things in extremis (secondary collisions caused by post-collision drivers mistakenly hitting the accelerator instead of the brake are a whole category of accident!). And in any case (again, as I understand the accident) this was stopped traffic and the Cruise vehicle was hit by another vehicle. In that kind of situation I think a human driver would be hard-pressed to know a pedestrian was nearby at all.
From Cruise's statements and news reports, it does not sound like the Cruise car was hit by another vehicle. A nearby vehicle flung a pedestrian in front of a Cruise vehicle. The Cruise car stopped on top of the pedestrian, then dragged them as it pulled over. Even accounting for post-crash panic, I find it unlikely that a decent human driver wouldn't have realized that the pedestrian ended up underneath their car.
But what if I belligerently insist that the average human driver would react in the worst possible way? I'm appealing to your angriest moments behind the wheel and asking you to believe that your impression of other drivers in those moments holds for most human drivers everywhere, all the time.
That's not a reasonable argument. The point isn't "The robot is good because you can imagine a human making the same mistake", it's "We wouldn't think this accident was notable at all if it were a human driver, so we at least recognize we're holding the robot to a higher standard".
The truth is that virtually every traffic accident could have been prevented on some level by every vehicle involved. People (you included, I assume?) who want to argue against automation in all cases will always be able to find something wrong with "the robot". Always. But that's not the right standard to be applying, because those accidents were going to happen anyway. The question is merely whether there will be more or fewer of them with "robots" at the wheel.
You're right, Waymo and Cruise and Venture Cap are on a moral crusade to prevent driving fatalities; this isn't about high-margin robotaxis and delusional hopes of accidentally creating AGI with a CV NN. And I agree with you, what the Cruise vehicle did in this case is exactly what a human would do. I mean, humans are fucking neanderthals, have you ever seen one behind the fucking wheel of a car? They can't go more than 30 seconds without nearly crashing, and they only know how to drive in a 4 km^2 zone. I personally know that every time I let my Tesla go FSD, I'm only intervening every 90 seconds because of the unreasonably high standards I hold it to. Self-driving cars only need to make the right decision like ~80% of the time. That's good enough; it's better than most human drivers.
And by the way, there's a cheaper and more effective way to reduce traffic fatalities, and it uses fewer precious resources, energy, and land, emits less carbon dioxide, and stimulates economies, but it's some kind of stealthtech from the 1800s. (choo choo)
> I personally know that every time I let my Tesla go FSD, I'm only intervening every 90 seconds because of the unreasonably high standards I hold it to. Self-driving cars only need to make the right decision like ~80% of the time. That's good enough; it's better than most human drivers.
That's only good enough if the types of wrong decisions it makes are the same as those a human would make. Otherwise the car will seem unpredictable to human drivers, and therefore dangerous.
If you want to do a good job of generating text, you have to develop a model of how the world works. For example, if you describe an experiment from a paper to ChatGPT and ask it to generate the results section of the paper, then ChatGPT probably needs to understand the phenomenon that the experiment is about, and be able to model it to some degree, in order to generate plausible results. If you think about ChatGPT this way, it is not just a text generator but a world simulator. The more accurately you can simulate the world, the better you can generate text. I think this is where the model's size and complexity come from: ChatGPT needs to know as much as it can about basically everything.
Putting it more generally, the difficulty of a computation isn't necessarily correlated with the file size of its end product. Imagine simulating the entire world to try to predict what next week's lottery numbers are going to be. That would require an unimaginable amount of data and computation, yet the output would be just a couple of numbers.
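A toy sketch of that asymmetry (my own illustration, not from the thread): a brute-force hash search can burn through tens of thousands of SHA-256 evaluations, yet the entire "product" of all that work is one small integer.

```python
import hashlib

def mine_nonce(prefix: str = "0000") -> int:
    """Search for the smallest nonce whose SHA-256 digest starts with `prefix`.

    The search does a lot of computation (roughly 16**len(prefix) hashes on
    average), but the final output is a single small number.
    """
    nonce = 0
    while True:
        digest = hashlib.sha256(str(nonce).encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1

nonce = mine_nonce("0000")
print(nonce)  # one tiny number, bought with tens of thousands of hashes
```

The same shape shows up in the lottery example: the cost lives in the process, not in the size of the answer.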
This could be a niche search engine for advanced users, but I feel that to average users this would be a strictly worse experience because they have no interest in figuring out how to "select an algorithm." They expect to type a query and the result they want comes up at the top. The search engine should be able to infer whether the user wants results that are more recent, more interactive, more authoritative, etc. based on the query, past search behavior, and anything else that is known about the user. If that isn't happening, the search engine needs to be made "smarter."
The way it's likely seen by many tech companies is that the more decisions you have to offload to the user, the less advanced your system is. An old car with a lot of knobs and dials and levers is not more advanced than a self-driving car with a single control: select destination. Some people will still prefer the old car, but it will be a niche market.
I agree, it’s the classic example of “people don’t really like choice”. When you are presented with the option of a search algo, there’s always this feeling that you chose wrong. It’s an additional thing to think about, and makes things more complicated.
To some extent that's already implemented in the form of the various tabs - if you want very recent results, go to the News tab or click the Tools button and select a time range. If you want academic results with citations, go to Google Scholar. If you want information from books, go to the Books tab. If you don't know/care, the main tab will mix results from all tabs.
You can add three more buttons on top of the tabs that are already there, but would their value for more advanced users be worth the extra friction/confusion for average users? And how would it scale if someone wants a fourth, fifth, or tenth "lens"?