Wikipedia is a bulwark against the Great Forgetting. As such it is doomed, but the effort is worthwhile; whatever survives will be random. But that is its Job One and there are no spare resources for a Job Two. It can only go on as it has been; its only other option is to just give up.
One's first thought is that they ought to be running away from underwriting this as fast as they can go. But then one realizes that it is all profit -- they need never pay a claim, because in accidents involving autonomous vehicles, it will never be possible to establish fault; and then one sees that the primary purpose of most automations is to obscure responsibility.
I think there's a narrow unregulated space where this could be true. I'm stretching my imagination trying to picture it: automations built with obscured responsibility as the intended outcome. And I can see profit as a plausible driver of that outcome.
As an example at the extreme end of the spectrum, there's been worry and debate for decades over automating military capabilities to the point of "push a button to win the war". There used to be, and hopefully still is, a lot of restraint about heading in that direction, in recognition of the need for ethical validation of automated judgements. The topic comes up now and then around Teslas, and the impossible decisions that FSD will have to make.
So at a certain point, and it may be right around the point of serious physical harm, the design decision to have or not have human-in-the-middle accountability seems to run into ethical constraints. In reality it's the ruthless, bottom-line-focused corporations -- which don't seem to be the norm, but may have an outsized impact -- that actually push up against those constraints. But even then, as an executive at one of those shops, I would be wary of documenting a decision to disregard potential harms. That line is being tested, but it's still there.
In my actual experience with automations, they've always been driven by laziness (reducing effort for everyone), by "because we can", or sometimes by a need to reduce human error.
You're not making any sense. In terms of civil liability, fault is attached to the vehicle regardless of what autonomous systems might have been in use at the time of a collision.
For Americans, the point of this is not even that this is what "conservatives" want; it is that, to the peasant mindset, this is the only way a society can be run or ever has been run, and the Afghans are just being honest about it.
"...there's no reason to believe NASA would send astronauts on a mission knowing their lives are in danger..."
I'm sorry, which fortieth anniversary has just been all over the news? And Columbia was only different if you want to pick over the definition of "send".
Yes, as localized for the user who got the screenshot. I checked it now and got a result formatted normally for Canada (or the US for that matter). I imagine that European users would see it with dots instead.
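For what it's worth, here's a minimal sketch of that locale effect using the standard Intl.NumberFormat API; the locale tags here are just illustrative:

  // Same number, formatted per locale
  const n = 1234567.89;
  new Intl.NumberFormat("en-CA").format(n); // "1,234,567.89"
  new Intl.NumberFormat("de-DE").format(n); // "1.234.567,89"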
He thinks they ought to converge. What does he think they ought to converge upon? How will he know that thing when he sees it? If he will know it when he sees it, why does he need help making it?
The answer to all of these, of course, is that convergence is not expected and correctness is not a priority. The use of an LLM is a boasting point, full stop. It is a performance. It is "look, Ma, no coders!". And it is only relevant, or possible, because although the LLM code is not right, the pre-LLM code wasn't right either. The right answer is not part of the bargain. The customer doesn't care whether the numbers are right. They care how the technology portfolio looks. Is it currently fashionable? Are the auditors happy with it? The auditors don't care whether the numbers are right: what they care about is whether their people have to go to any -- any -- trouble to access or interpret the numbers.