This is an echo of the “guns don’t kill people, people kill people” argument that has been going around for ages.
Let’s say you’re only using the software to flag suspicious behavior, and bringing in humans to make the final decision. What happens when (inevitably) the software disproportionately flags people with dark skin because it is not trained to recognize dark-skinned faces? Or when the software disproportionately flags poor people, or people with families?
It means that those groups of people will be targeted by the (human) bureaucracy and tasked with defending themselves, when they’ve done nothing wrong. Humans will inevitably trust the algorithm, they will use the algorithm’s outputs to justify their own biases, and even investigations come with a cost.
There’s this meme going around that the “algorithm isn’t biased, it’s the data”, but that argument doesn’t really hold water—machine learning systems, by default, learn to recognize correlations, and correlations in the real world collected with real sensors contain biases. ML, by its nature, picks up and encodes those biases, and you must make an effort to remove them—you can’t just throw an ML algorithm at a pile of data.
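To make that concrete, here is a minimal, self-contained sketch (synthetic numbers and made-up feature names, not taken from any real proctoring product) of how a model that is never shown race as a feature can still flag one group far more often, simply because the sensor data and the historical labels it learns from already carry the bias:

    # Synthetic sketch: the model never sees "group", yet its flags end up skewed,
    # because the face tracker works worse for group 1 and the historical
    # "suspicious" labels were partly driven by that tracker.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 20_000

    group = rng.integers(0, 2, n)         # 0 or 1; never given to the model
    cheated = rng.random(n) < 0.05        # same true cheating rate in both groups

    # Hypothetical "face detection confidence": the sensor works worse for group 1.
    face_conf = np.where(group == 0,
                         rng.normal(0.90, 0.05, n),
                         rng.normal(0.65, 0.10, n))

    # Historical labels: real cheating, plus spurious flags whenever the tracker
    # loses confidence -- so the training data encodes the sensor bias.
    labels = cheated | ((face_conf < 0.7) & (rng.random(n) < 0.8))

    model = LogisticRegression().fit(face_conf.reshape(-1, 1), labels)
    flagged = model.predict(face_conf.reshape(-1, 1)).astype(bool)

    for g in (0, 1):
        innocent = (group == g) & ~cheated
        print(f"group {g}: false-positive rate among non-cheaters = {flagged[innocent].mean():.1%}")

By construction the true cheating rate is identical in both groups, yet the trained model flags innocent people in one group at a much higher rate. That is the sense in which "it's the data" doesn't get the algorithm off the hook.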
I don't think you meant to do this, but you seem to have inadvertently made a very damn good argument for "guns don't kill people, people kill people."
People are getting screwed because they are at the long end of an unbroken chain of crap. Crappy organizations buy crappy software and crappy professors take the results seriously. The fact that there exists a crappy tool that flags all the black people as cheaters (or whatever; the point is that the false positives are unacceptably common and unacceptably distributed) is just one link in that chain.
Blaming the gun (the software in this case) is tacitly condoning the unbroken chain of half a dozen people/entities that are failing to do the job they are being paid to do. The software developers shouldn't be building crap software. The companies shouldn't be selling crap software. The universities shouldn't be buying crap software. The professors shouldn't be using the results of crap software. To look at that situation and say "yeah, the problem here is that this crap software exists" is beyond naive. The problem is that nobody is being held accountable for the bad outcome. I'm not asking for a whipping boy or a scapegoat here; the problem is that when nobody can be held fully responsible, it seems like nobody even gets held partly responsible.
This argument is fallacious: it assumes that “blaming the gun” means condoning the users, and that is a false dilemma.
There’s really no room for absolutism, where blame is assigned to one source rather than distributed among many contributing factors. Imagine how dangerous air travel would still be if, after an accident, investigators looked for only one cause to blame, and tacitly condoned anything else they came across in their investigations.
Re-read my comment. My point is that fault is distributed sufficiently that accountability seems to evaporate. This is a systemic or organizational problem. Anti-cheating software is just the form this specific instance has taken.
>Your argument assumes that “blaming the gun” is condoning the users
Well until now you weren't throwing even the slightest bit of shade in their direction.
>and this is a false dilemma.
And you've created a false middle ground.
>Imagine how dangerous air travel would still be if, after an accident, investigators looked for only one cause to blame, and tacitly condoned anything else they came across in their investigations.
What's the difference between "dangerous harmful cheating software" and "cheating software that's being shoehorned into use cases in which it was never expected to be used"?
That's why you don't blame the (metaphorical) gun. These are all just tools.
The FAA doesn't go off half-cocked about the evils of grade-2 fasteners because once upon a time an engineer thought a grade-2 would be enough when he should have used a grade-5. I can't believe I have to defend (invasive to the point of being unethical) anti-cheating software, but these sorts of software tools are just tools and can be used either wisely or poorly. The software doesn't know or care how it's being used. In an industrial setting they can (and are, same underlying tech, different companies) be used to design more effective interfaces that reduce errors (which I think we would all agree is a net positive contribution to the world).
The point is to not let the algorithm make decisions. The human bureaucracy is supposed to be there to determine the quality of the flags and analyze whether there is any discrimination at play. A company that lacks this human element is negligent and should be held responsible. Unless algorithms can be put on trial and held accountable, they shouldn’t be allowed to make decisions.
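To be concrete, "determining the quality of the flags" isn't hand-waving; it's roughly the kind of per-group audit sketched below (toy data, hypothetical column names), comparing flag rates and false-positive rates across groups:

    # Sketch of a per-group audit (hypothetical column names, toy data):
    # compare flag rates and false-positive rates across demographic groups.
    import pandas as pd

    def audit_flags(df, group_col="group", flag_col="flagged", truth_col="cheated"):
        rows = []
        for g, sub in df.groupby(group_col):
            innocent = sub[~sub[truth_col]]          # students who did not actually cheat
            rows.append({
                group_col: g,
                "n": len(sub),
                "flag_rate": sub[flag_col].mean(),
                "false_positive_rate": innocent[flag_col].mean() if len(innocent) else float("nan"),
            })
        return pd.DataFrame(rows)

    # Toy example:
    df = pd.DataFrame({
        "group":   ["A"] * 4 + ["B"] * 4,
        "flagged": [False, True, False, False, True, True, False, True],
        "cheated": [False, True, False, False, False, True, False, False],
    })
    print(audit_flags(df))

If those rates differ wildly between groups, that's exactly the kind of discrimination the human review is there to catch.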
Also guns don’t kill people, people do. Otherwise explain to me why it would be okay for certain institutions to be armed but not individuals. If guns are the problem, then no one should have them (including the military/police).
Just because you disagree with me, it doesn’t mean that I misunderstood the viewpoint I’m responding to.
> The point is to not let the algorithm make decisions.
And my response is—that’s not enough. It sounds like the algorithm, because it is biased, has the effect of increasing the bias in the whole system. If your response is that humans should work harder to counteract biases in machine systems, well, I think that’s just a way to CYA and assign blame but not a way to solve the problem—humans will remain biased, and they will trust automated systems even when that trust is misplaced.
As an analogy, it’s like a driver in a partially autonomous car. As soon as the automation takes over, the driver stops paying attention to the road. We can make a big fuss and talk about how it’s the driver’s fault and how the driver should pay attention, but we’ve placed them in a system where they are discouraged from paying attention, and the system is more dangerous as a consequence.
> Also guns don’t kill people, people do. Otherwise explain to me why it would be okay for certain institutions to be armed but not individuals. If guns are the problem, then no one should have them (including the military/police).
This is a false dilemma / false dichotomy. This argument assumes that EITHER access to guns is to blame OR people are to blame, but not both; there are obviously other ways to think about the problem.
Any rational way to look at problems will look at multiple contributing factors.
Parent: The software itself shouldn't have any control over the student's grades. A person should have to review the flags and actually find some wrongdoing. Not just push 'yes' and walk away.
You: Let’s say you’re only using the software to flag suspicious behavior, and bringing in humans to make the final decision. What happens when (inevitably) the software disproportionately flags people with dark skin because it is not trained to recognize dark-skinned faces? Or when the software disproportionately flags poor people, or people with families?
Answer: A person should have to review the flags and actually find some wrongdoing.
You: It means that those groups of people will be targeted by the (human) bureaucracy and tasked with defending themselves, when they’ve done nothing wrong
Me: The human bureaucracy is supposed to be there to determine the quality of the flags and analyze whether there is any discrimination at play. A company that lacks this human element is negligent and should be held responsible.
> And my response is—that’s not enough. It sounds like the algorithm, because it is biased, has the effect of increasing the bias in the whole system.
Hence why the humans should be held responsible for not addressing bias in their system. And why the actions of an algorithm should be the responsibility of its creators.
> If your response is that humans should work harder to counteract biases in machine systems, well, I think that’s just a way to CYA and assign blame but not a way to solve the problem—humans will remain biased, and they will trust automated systems even when that trust is misplaced.
So...? What’s your solution? All you’re saying is that humans will remain biased, and yeah, they will. That’s why we have laws that punish discrimination and bias. If your company creates products (algorithms) that discriminate, you should be held responsible. The human element is not there to “work harder” but to assure that what you’re releasing works properly. If you don’t think increased accountability fixes the problem, please tell us what would be “enough”.
> This argument assumes that EITHER access to guns is to blame OR people are to blame, but not both
No assumption. If you think a cop can have a gun but a criminal can’t, then the gun isn’t the problem. If you believe cops can have guns but civilians can’t, then the main factor is the person with the gun and not the gun itself. This isn’t an argument against increased restrictions, and if you believe no one should have guns (including the government) I’m all for it. But if you believe some people have the right to have guns while others don’t, I’m hard-pressed to see any other determining factor except who has the gun.
Please make an effort to engage with the comments I make, rather than making guesses about my mental state.
> Me: The human bureaucracy is supposed to be there to determine the quality of the flags and analyze whether there is any discrimination at play. A company that lacks this human element is negligent and should be held responsible.
The human bureaucracy doesn’t do that very well. The human bureaucracy is deeply flawed and has limited skills. We can assign blame to the human bureaucracy for its failings all we want, but if we want to effect change then it’s necessary to include a broader range of factors in our fault analysis.
In other words, “assigning blame” is a low-stakes political game, and “root-cause analysis” is what really matters.
This is like the 737 MAX failures. You can say that it’s the pilot’s responsibility to fly the plane correctly—but the fact is, pilots have a limited amount of skill and focus, and can’t overcome any arbitrary failing of technology. So we rightly attribute the problem to the design of the system, of which the human is only one component.
This grading software is like the 737 MAX—it’s software that, as part of a complete system including non-software components like humans, does a bad job and needs repair. The 737 MAX reports listed something like NINE different root causes.
I don’t understand this absolutist viewpoint that the human bureaucracy is the ONLY thing that you need to protect you from bad software. There are multiple root causes, and the bad software is one of them.
> Hence why the humans should be held responsible for not addressing bias in their system. And why the actions of an algorithm should be the responsibility of its creators.
So you’re saying that there’s a problem with the software, and that we shouldn’t place all the blame on the college administrators? Isn’t that what I’m saying?
> But if you believe some people have the right to have guns while others don’t, I’m hard-pressed to see any other determining factor except who has the gun.
I do believe that not everyone should have the right to own guns, but if you’re interested in arguing with me about it, I won’t engage. If the comparison doesn’t work for you, think of something less emotionally charged like the 737 MAX or the Tesla Autopilot—both are scenarios where we rightly cite the software / automation as a root cause in accidents.
> I don’t understand this absolutist viewpoint that the human bureaucracy is the ONLY thing that you need to protect you from bad software. There are multiple root causes, and the bad software is one of them.
There are multiple intermediate causes, and all of them are the responsibility of the human bureaucracy—including, to the extent it contributes, the selection, use, and failure to correct bad software—and all of them stem from one root cause, to wit, that the bureaucracy faces insufficient consequences for its failures and thus lacks motivation to do its job well.
Now, if the analysis were being performed on behalf of the bureaucracy because it had decided to do its job, rather than being part of a discussion outside of it, then the causes which are intermediate from a global perspective would be root causes, sure. Context matters.
I will agree that certain biases are almost certainly baked into the software and that they will disadvantage anyone (and any situation) that isn't considered 'normal' by the software's creators.
If your argument is that they should be paying a person to sit there and watch a class of students while they're doing the tests, I'm not against that. They probably should.
But humans will always attempt to make their own work easier and less time-consuming, and this is a tool for that. Eventually, something like this is going to exist for distance learning. This is unlikely to be the final configuration of that tool, but it's a step on that road, no matter how much people don't like it.
What's needed are proper controls on the usage of the tool. And proper training. And proper oversight.
If your argument is for something else instead of the above, then I don't know what your solution would be. "Don't have school" and "don't worry about cheating" aren't acceptable.