There's an assumption you're making here that I'm not sure you're aware of: that the system is good and that individuals "should" operate a certain way inside it. Failure to do so is a fault of the individual, not the system.
This occurred to me reading your story of the guy who barely made it through school. You know, telling the story from the guy's point of view might be more useful. Here he is, struggling with the system, and over time and with enough effort he manages to claw his way into a future that he's created for himself.
Same goes for the CEOs "ruining" the company. Get a new executive job, the board of directors are a bunch of idiots, nobody wants to take ownership and the ship is going to sink one way or another. So you keep the ship afloat as long as possible. Meanwhile, since you've managed to keep the company viable much longer than most others could, you get a nice reward.
I'm not saying that these people aren't morally defective in some way, only that you, as the author, are applying a moral judgment to the story that the reader needs to be aware of. I could tell these same stories much differently.
From my own personal experience, I find I understand systems well enough when I can assume that all players are acting in good faith and intelligently. I don't need dumb people at the bottom or sociopaths at the top. In fact, that's the irony of it all: once you begin to make moral judgments about the qualities of various people in systems, it becomes much easier to behave in ways that most folks would find offensive.
I think you're being unfair to netcan here. Netcan is talking about system problems, not individual moral deficiencies. It doesn't really matter whether netcan assumes that these systems are good and individuals should operate a certain way inside of them. The point is that these systems are designed under that assumption, and that causes problems (for those systems). It's pretty clear that the story about the guy who barely made it out of school highlights flaws in school admissions and employment recruitment mechanisms. By any reasonable objective measure, his high-profile employment is contrary to the design goals of those mechanisms. Whether or not you agree with those goals, it is very plausible that the flaws are present because the designer made the implicit assumption that actors would (to some extent) agree with those goals, and would act "morally" from that point of view.
With the story about the executives, either they acted in the best interests of the company or they didn't. If they did, then there's no moral judgement to be made. If they didn't, then the implicit moral judgement was made by the board of directors who hired them (not netcan).
Netcan's point isn't that people at the top are sociopaths and people at the bottom are dumb. It's much more complicated than that. Bad systems set up moral hazards which incentivise sociopathic behaviour. Different people react to moral hazards in different ways. However, once one person (who may be morally deficient) starts following the incentives, many other people (most of whom are simply normal flawed humans) will follow suit. Thus, practically dealing with the problem means treating it as a flaw in the design of the system, not in the individual actors.
>> From my own personal experience, I find I understand systems well enough when I can assume that all players are acting in good faith and intelligently.
So there's no room in your worldview for people at the top who act in their own self interest to the detriment of others and the company as a whole? And everyone that thinks they see that happening just doesn't understand the system well enough?
Certainly there are bad actors, everywhere you look.
The point here is that letting my own personal feelings or morality shade my analysis leads to poor results. Systems are bad even when full of good people. In fact, that's the hell of the thing. Good people in bad systems act in bad ways.
Confusing the matter with long-winded diatribes about the personal qualities of the various participants might make me feel good, but it is not helpful. If it makes you feel any better, you can also assume that the system is full of sociopaths. Works the same way. The key here is that you can't start confusing your own personal judgments about the morals of people inside the system with your opinion about the system as a whole.
There's a great discussion here about how a good person should act in a bad system. Resign? Speak out? But that discussion is also not related to analyzing the system as a whole.
>> The key here is that you can't start confusing your own personal judgments about the morals of people inside the system with your opinion about the system as a whole.
What if the judgement is that the system would be fine if it wasn't full of sociopaths?
>> Good people in bad systems act in bad ways.
That would make them bad people. For instance, before pollution laws came along, when it was already known that polluting was bad, the people who did it were still bad people regardless of the system. The system had to be adjusted to take account of the bad actors.
Letting your own personal morality shade your analysis is very useful: we should have a system which encourages moral action rather than rewarding immoral action. Imagining a system full of bad actors gives us the ability to refine it.
We'll never agree on what's moral, of course, but that is an item for public debate.
Judgemental, much? I think you're presupposing that people have far more autonomy than they actually do in society and business. Moreover, you're making a judgement that any amount of sin automatically makes you a bad person, regardless of the circumstances.
Let's say you're a line worker at <big box retailer>. You notice the store manager embezzling petty cash. At the same time, you also know that the promise of anonymity in reporting the store manager is bullshit, and there's a high probability that you'll be fired long before the investigation of the store manager is completed. Meanwhile, if you do nothing, the store manager will get caught up in an audit eventually, and will get fired/prosecuted anyway. Do you report the store manager? Or do you go along, not participating, but not rocking the boat, either? Most importantly, does your moral calculus change if you have dependents who are reliant on your income for food, clothing and shelter?
You've come up with a huge grey area in what was a boolean argument.
The statement was pretty unequivocal - "Good people in bad systems act in bad ways."
If the actions (pollution in my example) are unequivocally bad (which they are) then regardless of the system the actors are bad people.
It's not clear in your argument that the worker is acting in a bad way in the grand scheme of things; he/she is basically passive, which is a different thing. The manager is certainly a bad person. But I would also be the first to agree that any morality is mutable, hence the thought that systems should be adjusted on the will of society rather than individuals.
However -
>> I think you're presupposing that people have far more autonomy than they actually do in society and business.
Someone takes the decision to dump raw sewage in the river, or toxic waste or whatever. Regardless of how many layers of bureaucracy they hide behind, they are bad people.
>Someone takes the decision to dump raw sewage in the river, or toxic waste or whatever. Regardless of how many layers of bureaucracy they hide behind, they are bad people.
The problem is that it's only "bad" when you look at it from the outside. You can point to this person or that person and say, "You're bad," or "You're evil," and smugly congratulate yourself that you would never act in such a fashion. You don't know what pressures or incentives they were subject to. You don't know what other choices they had. Moreover, your approach puts the blame on the line workers - the worker dumping the oil into the storm drain, the truck driver driving 20 hours a day, the manager writing up a justification of why an unreasonable risk is reasonable after all - while assigning no culpability to the leadership of the organization that created the circumstances where those actions were the most reasonable alternative. You expect people to stand up and throw themselves into the gears of the machine even if the inevitable consequence is that they get crushed, and the machine rolls on, heedless.
I used to think in much the same fashion. But then I read a very powerful book: Diane Vaughan's The Challenger Launch Decision. I would challenge you to read that book and identify, of all the people involved in the decision to launch the Challenger on that fateful day, which one acted in an evil manner? Which engineer wasn't at least trying to act in the best interest of the organization and the crew?
That's the point I'm trying to make. People can be trying to do good, but, because of the information available to them, the incentives they operate under, and the time or social pressure they're subject to, their actions can work towards bad outcomes. Tarring those people as "bad" or "evil" does nothing to change the organization that those people operate under, and simply ensures that the next person in the same position in the organization just makes the same mistakes.
I think the real merit in pointing out widespread problematic individual behaviours is that such behaviours point to systemic problems. I agree that many systems are bad even when full of "good" people. However, I think some systems are bad precisely because they attract or select "bad" people. Some systems will even turn otherwise "good" people into "bad" people.
I mean, here we're saying "good" and "bad" without really defining terms. If "good" means acting socially responsibly, then corporations are a great example of what I'm talking about. The shareholder-director framework is a system that allows very little room for socially responsible behaviour. Shareholders are human beings, with normal human values, but legally, the director must solely represent their financial interests or risk getting sued. For a CEO to act in a socially responsible way, they must be able to argue that acting otherwise would reduce profit (e.g. through damage to public image). Based on various scandals in the past, many people characterise CEOs as sociopaths. The more interesting point is that the job itself is essentially sociopathic.
So, while I think the discussion about how a good person should act in a bad system is interesting, I'm much more interested in figuring out how to create systems which incentivise good behaviour in the long term. For example, why couldn't we have a corporate charter which explicitly valued long-term growth or customer satisfaction or public image or job creation? That is, instead of just having a director with a goal that is "maximise PROFIT", have their goal be a more complex optimisation, e.g. "maximise X*PROFIT + Y*NUMBER_OF_EMPLOYEES_WITH_DECENT_LIVING_WAGE - Z*CARBON_FOOTPRINT". Wouldn't it be cool to invest in or work for a company that explicitly shared some of your own personal values?
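The weighted-objective idea above can be sketched in a few lines. Everything here is a hypothetical illustration - the function name, metrics, and weight values X, Y, Z are invented for this example, not part of any real charter framework:

```python
# Sketch of a multi-objective corporate charter: the director's "score" is a
# weighted sum of several metrics rather than profit alone. Names and weights
# are hypothetical illustrations chosen only to make the arithmetic concrete.

def charter_score(profit, employees_with_living_wage, carbon_footprint,
                  x=1.0, y=50_000.0, z=100.0):
    """Maximise x*PROFIT + y*EMPLOYEES_WITH_LIVING_WAGE - z*CARBON_FOOTPRINT."""
    return x * profit + y * employees_with_living_wage - z * carbon_footprint

# Under this charter, a company that trades some profit for better wages and a
# smaller footprint can outscore a pure profit-maximiser.
pure_profit = charter_score(profit=10_000_000,
                            employees_with_living_wage=20,
                            carbon_footprint=5_000)
balanced = charter_score(profit=9_000_000,
                         employees_with_living_wage=120,
                         carbon_footprint=2_000)
```

The interesting design question is who sets the weights: baking them into the charter (rather than leaving them to the director's discretion) is what would change the incentives.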
You have an odd definition of good people. People who act in bad ways aren't good people, regardless of the system. Good people remove themselves from situations and systems that try to compromise their morals.
Correct, it's not always possible to find a different situation, but if the situation makes you act badly then you need to accept that you're just not a good person. You're a situationally good person, which makes you not actually a good person. If your morals are set aside because your livelihood is at risk, then they aren't actually your morals, are they? Truly good people accept the consequences of their morals and don't change them because it's easier.
So yea, we need good systems because quite frankly, most people aren't actually good people, they're only good when things are easy. Moral hazard is a serious problem in any system that needs to be addressed because people respond to incentives and aren't nearly as good as they think they are.
This is why a good social safety net is important for society: it keeps people from being forced into crime by poverty. People should be able to walk away from bad situations and fall back on the safety net rather than being forced by the situation to be bad.
In my opinion, good people are people who do good things even when they do not expect it to benefit them to do so. Saintly people are people who do good things even when it will be ruinous to them. Many people are reasonably good people, but only a tiny fraction of people are saintly.
You are free to use different terminology, of course.
That's a very religious-sounding notion. I'm an atheist; I'd call your saintly people good people, and I'd call your good people simply human, if such a differentiation is necessary.
I'm not a Christian, so I'm not using "saintly" in any theological sense, and I was using "good" to distinguish them from people who would only perform actions when they see a benefit to them. In my sense, a lot of people wouldn't interrupt what they're doing to stop at the side of the road and help someone whose car has broken down, figuring that it's not their problem and that someone else would handle it. A good person would stop and help as much as they could within reason. Only truly remarkable people would do things that go way above and beyond the call of duty, like driving the stranger that they had helped to and from work for a couple of weeks while their car was in the shop, driving well out of their way to do so. But I think it's still fair to call the kind of people who stop to help good people.
But like I said, this is mainly a terminology distinction, so I don't have any great problem with your version.