Most criticisms of banking as a whole are largely about moral hazard: seeking out and creating lopsided risk that can be partially offloaded onto the public, while the rewards of that risk are captured by the bank.
There is a fantastic web series called "The Gervais Principle: The Office According to The Office" (1). The central thesis is that organizations have three types of employees: Sociopaths at the top. Clueless in the middle. Losers on the bottom. The defining characteristic of the Sociopath elite is 'the basic heads-I-win-tails-you-lose pattern behind all Sociopath machinations.'
Strangely, this fun piece changed how I look at moral hazard. Before, I thought of it mostly as an inevitable consequence of badly designed systems: employee reward schemes that employees learn to manipulate in ways that are detrimental to the business, or cobra bounties intended to reduce the number of snakes in a city leading to backyard cobra farms. I thought it was less of a problem in emergent, evolved human systems. After reading this piece, I think of it differently. It's a human psychology thing. Human systems have assumptions about fair play built into them. Psychologically perverse humans (sociopaths) can take advantage of these assumptions.
I knew a guy from uni. He was doing a research Masters, badly. He failed. Resubmitted. Appealed. Rewrote. Appealed again, and eventually got a pass. He wasn't really smart enough and he never understood the subject. But universities (especially graduate programs) don't really fail students. Students drop out, especially the ones who struggle. Persistence can substitute for smarts, and he walked away with a Masters from a top uni. I later learned that he got into the uni in the first place with some hack: he applied to a second-tier uni, and used some clever clauses to transfer to the better one without having to go through the front-door admissions process.
He then proceeded to hound every big name in the field for references. None of them wanted to read his pile of crap, but again, persistence can substitute for quality, and he eventually got some nice references and quotes. He used these, along with a professionally written resume and application letters ($450), to land a sought-after graduate job: a big government body that takes at least 12 months to fire someone. Excellent entry into the world of public policy research. Before that happened, he had found a recruiter who specializes in these guys. He moves jobs every 6-12 months, which means a recurring bounty for the recruiter and a raise for the sociopath. On getting a job, he immediately started planning for the next one, like a heist. He was so confident. It took a remarkably long time for bosses & coworkers to figure out exactly how completely useless he was. These people are rare. The systems assume they don't exist, and sociopaths take advantage of that weakness.
Around the same time, I was hearing the story of a largish company (~$0.5b) bought by a hedge fund and placed in the hands of an 'A-Player' team of senior executives. I knew a manager and long-term (20 yrs) employee there who narrated the story for me over a couple of years. The executives were outsiders to the company and industry, making salaries an order of magnitude higher than the previous executive team. They proceeded to ruin the company. But they managed to conceal a lot of that: milking or liquidating assets, using something like a leveraged buyout to raise massive funds to acquire companies. The owners eventually figured it out. The abysmal CEO and his team of executives were paid to leave, and every one of them landed a CEO/COO job at an even bigger company. If they had happened on a win, I'm sure they would have done even better.
After reading 'The Gervais Principle' I saw a bunch of examples of these 'sociopathic machinations' that are much more available to the average person. Just about any mid-level manager at any company can take big risks. Draw up massive plans for expensive expansions. Pitch advertising campaigns. Offer to take on impossible product development. Propose to open a new office in another country. Promise huge benefits and demand huge resources. Say you are 100% sure it will work. You might get what you ask for. If you do, the game starts. If you don't, what's the worst that can happen? Maybe your proposals get shot down by a sane senior. That might hurt your status marginally. No big deal. You're more likely to walk away with a small win for showing initiative. If you get the resources and proceed to balls up your project, you can always quit. If you are in a position to find a similar job, that's not a big loss. If you hit the eject button at the appropriate time, you can still walk away with a win: you can time it so you are negotiating to be poached while running this doomed program. If it all works out, you are the author of a coup.
A normal company is not resilient to this type of attack, because it's pretty rare and not usually identified when it does happen. Companies are built on the assumption that people aren't like that. The normal deal is that the company takes on the relatively small risk of employees being unproductive; in exchange, it takes all the upside. A sociopath flips that deal, taking on massive risk the company is on the hook for and capturing some of the rewards. Their own risk is capped at a pretty low level, so the more reward the better, regardless of risk. In an extreme case they might lose their job and take a reputation hit, but most likely they'll just leave and get a higher salary elsewhere. It's like taking money out of a company account to the casino: if you double it, you keep the profit; if you lose it, the company eats the loss. The moral hazard is always there. Most people just aren't corrupted by it.
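A back-of-the-envelope sketch of that flipped deal (every number here is invented for illustration; the point is the sign of each expected value, not the magnitudes):

    # Hypothetical manager's bet; all figures are made up.
    p_success = 0.2            # chance the grand project actually works
    company_gain = 10_000_000  # upside the company captures on success
    company_loss = 5_000_000   # downside the company eats on failure
    mgr_gain = 300_000         # bonus/promotion/poaching value to the manager
    mgr_loss = 20_000          # manager's capped downside (job hunt, status hit)

    ev_company = p_success * company_gain - (1 - p_success) * company_loss
    ev_manager = p_success * mgr_gain - (1 - p_success) * mgr_loss
    print(f"company EV: {ev_company:+,.0f}")  # -2,000,000: a bad bet for the firm
    print(f"manager EV: {ev_manager:+,.0f}")  # +44,000: a good bet for the manager

The same gamble is negative expected value for the company and positive for the employee, which is the whole trick.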
Moral hazard is the right word for this. Morality is what prevents it in normal circumstances. Sociopath is a bad word for it, because it implies some sort of inborn pathology. That probably exists, but more often it is a learned behavior. Human morality is absorbed from the environment. If you get a bunch of these sociopaths together, they will rub off on other people. That's how sociopaths breed. It's like a cancerous meme.

I don't have suggestions for dealing with it.
There's an assumption you're making here that I'm not sure you're aware of: that the system is good and that individuals "should" operate a certain way inside of it. Failure to do so is a fault of the individual, not the system.
This occurred to me reading your story of the guy who barely made it through school. You know, telling the story from the guy's point of view might be more useful. Here he is, struggling with the system, and over time and with enough effort he manages to claw his way into a future that he's created for himself.
Same goes for the CEOs "ruining" the company. You get a new executive job; the board of directors are a bunch of idiots, nobody wants to take ownership, and the ship is going to sink one way or another. So you keep the ship afloat as long as possible. Meanwhile, since you've managed to keep the company viable much longer than most others could, you get a nice reward.
I'm not saying that these people aren't morally defective in some way, only that you, as the author, are applying a moral judgment to the story that the reader needs to be aware of. I could tell these same stories much differently.
From my own personal experience, I find I understand systems well enough when I can assume that all players are acting in good faith and intelligently. I don't need dumb people at the bottom or sociopaths at the top. In fact, that's the irony of it all: once you begin to make moral judgments about the qualities of various people in systems, it becomes much easier to behave in ways that most folks would find offensive.
I think you're being unfair to netcan here. Netcan is talking about system problems, not individual moral deficiencies. It doesn't really matter whether netcan assumes that these systems are good and individuals should operate a certain way inside of them. The point is that these systems are designed under that assumption, and that causes problems (for those systems). It's pretty clear that the story about the guy who barely made it out of school highlights flaws in school admission and employment recruitment mechanisms. By any reasonable objective measure, his high profile employment is contrary to the design goals of those mechanisms. Whether or not you agree with those goals, it is very plausible that the flaws are present because the designer made the implicit assumption that actors would (to some extent) agree with those goals, and would act "morally" from that point of view.
With the story about the executives, either they acted in the best interests of the company or they didn't. If they did, then there's no moral judgement to be made. If they didn't, then the implicit moral judgement was made by the board of directors who hired them (not netcan).
Netcan's point isn't that people at the top are sociopaths and people at the bottom are dumb. It's much more complicated than that. Bad systems set up moral hazards which incentivise sociopathic behaviour. Different people react to moral hazards in different ways. However, once one person (who may be morally deficient) starts following the incentives, many other people (most of whom are simply normal flawed humans) will follow suit. Thus, practically dealing with the problem means treating it as a flaw in the design of the system, not in the individual actors.
>> From my own personal experience, I find I understand systems well enough when I can assume that all players are acting in good faith and intelligently.
So there's no room in your worldview for people at the top who act in their own self interest to the detriment of others and the company as a whole? And everyone that thinks they see that happening just doesn't understand the system well enough?
Certainly there are bad actors, everywhere you look.
The point here is that letting my own personal feelings or morality shade my analysis leads to poor results. Systems are bad even when full of good people. In fact, that's the hell of the thing. Good people in bad systems act in bad ways.
Confusing the matter with long-winded diatribes about the personal qualities of the various participants might make me feel good, but it is not helpful. If it makes you feel any better, you can also assume that the system is full of sociopaths. It works the same way. The key here is that you can't start confusing your own personal judgments about the morals of people inside the system with your opinion about the system as a whole.
There's a great discussion here about how a good person should act in a bad system. Resign? Speak out? But that discussion is also not related to analyzing the system as a whole.
>> The key here is that you can't start confusing your own personal judgments about the morals of people inside the system with your opinion about the system as a whole.
What if the judgement is that the system would be fine if it wasn't full of sociopaths?
>> Good people in bad systems act in bad ways.
That would make them bad people. For instance, before pollution laws came along, when it was already known that polluting was bad, the people who did it were still bad people regardless of the system. The system had to be adjusted to take account of the bad actors.
Letting your own personal morality shade your analysis is very useful: we should have a system which encourages moral action rather than rewarding immoral action. Imagining a system full of bad actors gives us the ability to refine it.
We'll never agree on what's moral, of course, but that is an item for public debate.
Judgemental, much? I think you're presupposing that people have far more autonomy than they actually do in society and business. Moreover, you're making a judgement that any amount of sin automatically makes you a bad person, regardless of the circumstances.
Let's say you're a line worker at <big box retailer>. You notice the store manager embezzling petty cash. At the same time, you also know that the promise of anonymity in reporting the store manager is bullshit, and there's a high probability that you'll be fired long before the investigation of the store manager is completed. Meanwhile, if you do nothing, the store manager will get caught up in an audit eventually, and will get fired/prosecuted anyway. Do you report the store manager? Or do you go along, not participating, but not rocking the boat, either? Most importantly, does your moral calculus change if you have dependents who are reliant on your income for food, clothing and shelter?
You've come up with a huge grey area in what was a boolean argument.
The statement was pretty unequivocal - "Good people in bad systems act in bad ways".
If the actions (pollution in my example) are unequivocally bad (which they are) then regardless of the system the actors are bad people.
It's not clear in your argument that the worker is acting in a bad way in the grand scheme of things; he/she is basically passive, which is a different thing. The manager is certainly a bad person. But I would also be the first to agree that any morality is mutable, hence the thought that systems should be adjusted by the will of society rather than of individuals.
However -
>> I think you're presupposing that people have far more autonomy than they actually do in society and business.
Someone takes the decision to dump raw sewage in the river, or toxic waste or whatever. Regardless of how many layers of bureaucracy they hide behind, they are bad people.
>Someone takes the decision to dump raw sewage in the river, or toxic waste or whatever. Regardless of how many layers of bureaucracy they hide behind, they are bad people.
The problem is that it's only "bad" when you look at it from the outside. You can point to this person or that person and say, "You're bad," or "You're evil," and smugly congratulate yourself that you would never act in such a fashion. You don't know what pressures or incentives they were subject to. You don't know what other choices they had. Moreover, your approach puts the blame on the line workers - the worker dumping the oil into the storm drain, the truck driver driving 20 hours a day, the manager writing up a justification of why an unreasonable risk is reasonable after all - while assigning no culpability to the leadership of the organization that created the circumstances where those actions were the most reasonable alternative. You expect people to stand up and throw themselves into the gears of the machine even if the inevitable consequence is that they get crushed, and the machine rolls on, heedless.
I used to think in much the same fashion. But then I read a very powerful book: Diane Vaughan's "The Challenger Launch Decision". I would challenge you to read that book and identify, of all the people involved in the decision to launch the Challenger on that fateful day, which one acted in an evil manner? Which engineer wasn't at least trying to act in the best interest of the organization and the crew?
That's the point I'm trying to make. People can be trying to do good, but, because of the information available to them, the incentives they operate under, and the time or social pressure they're subject to, their actions can work towards bad outcomes. Tarring those people as "bad" or "evil" does nothing to change the organization that those people operate under, and simply ensures that the next person in the same position in the organization makes the same mistakes.
I think the real merit in pointing out widespread problematic individual behaviours is that they point to systemic problems. I agree that many systems are bad even when full of "good" people. However, I think some systems are bad precisely because they attract or select "bad" people. Some systems will even turn otherwise "good" people into "bad" people.
I mean, here we're saying "good" and "bad" without really defining terms. If "good" means acting socially responsibly, then corporations are a great example of what I'm talking about. The shareholder-director framework is a system that allows very little room for socially responsible behaviour. Shareholders are human beings, with normal human values, but legally, the director must solely represent their financial interests or risk getting sued. For a CEO to act in a socially responsible way, they have to believe that acting otherwise will reduce profit (e.g. through damage to public image). Based on various scandals in the past, many people characterise CEOs as sociopaths. The more interesting point is that the job itself is essentially sociopathic.
So, while I think the discussion about how a good person should act in a bad system is interesting, I'm much more interested in figuring out how to create systems which incentivise good behaviour in the long term. For example, why couldn't we have a corporate charter which explicitly valued long-term growth or customer satisfaction or public image or job creation? That is, instead of just having a director with a goal that is "maximise PROFIT", have their goal be a more complex optimisation, e.g. "maximise X*PROFIT + Y*NUMBER_OF_EMPLOYEES_WITH_DECENT_LIVING_WAGE - Z*CARBON_FOOTPRINT". Wouldn't it be cool to invest in or work for a company that explicitly shared some of your own personal values?
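A minimal sketch of what such a charter objective could look like; the weights, field names, and strategies are all hypothetical:

    # Hypothetical charter objective; weights X, Y, Z would be fixed in the charter.
    def director_score(profit, living_wage_employees, carbon_footprint,
                       X=1.0, Y=50_000.0, Z=100.0):
        """What the board optimises instead of raw profit."""
        return X * profit + Y * living_wage_employees - Z * carbon_footprint

    # Two made-up strategies for the same company:
    offshore = director_score(12_000_000, living_wage_employees=200, carbon_footprint=80_000)
    local = director_score(10_000_000, living_wage_employees=450, carbon_footprint=20_000)
    print(offshore, local)  # 14000000.0 vs 30500000.0: the charter prefers 'local'

The hard part is agreeing on the weights, but the director's legal duty would then track them instead of profit alone.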
You have an odd definition of good people. People who act in bad ways aren't good people, regardless of the system. Good people remove themselves from situations and systems that try and compromise their morals.
Correct, it's not always possible to find a different situation, but if the situation makes you act badly then you need to accept that you're just not a good person. You're a situationally good person which makes you not actually a good person. If your morals are set aside because your livelihood is at risk, then they aren't actually your morals are they? Truly good people accept the consequences of their morals and don't change them because it's easier.
So yea, we need good systems because quite frankly, most people aren't actually good people, they're only good when things are easy. Moral hazard is a serious problem in any system that needs to be addressed because people respond to incentives and aren't nearly as good as they think they are.
This is why a good social safety net is important for society, it keeps people from being forced into crime because of poverty. People should be able to walk away from bad situations and fall back on the safety net rather than being forced by the situation to be bad.
In my opinion, good people are people who do good things even when they do not expect it to benefit them to do so. Saintly people are people who do good things even when it will be ruinous to them. Many people are reasonably good people, but only a tiny fraction of people are saintly.
You are free to use different terminology, of course.
That's a very religious sounding notion; I'm an atheist, I'd call your saintly people good people and I'd call your good people simply human if such a differentiation is necessary.
I'm not a Christian, so I'm not using "saintly" in any theological sense, and I was using "good" to distinguish them from people who would only perform actions when they see a benefit to them. In my sense, a lot of people wouldn't interrupt what they're doing to stop at the side of the road and help someone whose car has broken down, figuring that it's not their problem and that someone else would handle it. A good person would stop and help as much as they could within reason. Only truly remarkable people would do things that go way above and beyond the call of duty, like driving the stranger that they had helped to and from work for a couple of weeks while their car was in the shop, driving well out of their way to do so. But I think it's still fair to call the kind of people who stop to help good people.
But like I said, this is mainly a terminology distinction, so I don't have any great problem with your version.
I divide my officers into four groups. There are clever, diligent, stupid, and lazy officers. Usually two characteristics are combined. Some are clever and diligent -- their place is the General Staff. The next lot are stupid and lazy -- they make up 90 percent of every army and are suited to routine duties. Anyone who is both clever and lazy is qualified for the highest leadership duties, because he possesses the intellectual clarity and the composure necessary for difficult decisions. One must beware of anyone who is stupid and diligent -- he must not be entrusted with any responsibility because he will always cause only mischief.
At a prior (very large) company, someone described the path to career success as "Jumping into a lake, creating as big a splash as possible, and getting out of the lake before the water comes down."
I think that summarizes your point. :-)
To answer, "How do we deal with it?" Make the owners and lenders of firms responsible. If a CEO or other employee screws up a company, take the shareholders to 0, then take the bondholders to 0, then take the counterparties to 0. People won't want to do business with people taking crazy risks. The market knew that AIG was gambling bigtime. If folks thought their trades might go to 0, AIG would have never been given so much rope.
It's a tough issue though. There's game theory behind it too. If everyone makes the same high risk trade, and you don't, you can't win. If they're right - they get paid, and you get fired for underperformance. If they're wrong, the system collapses and nobody gets paid.
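A toy payoff sketch of that game (probabilities and payoffs invented; only the ordering matters):

    # The herd's high-risk trade usually pays off... until the year it doesn't.
    p_trade_works = 0.9
    # Follow the herd: paid if it works; if it blows up, nobody gets paid anyway.
    ev_follow = p_trade_works * 1.0 + (1 - p_trade_works) * 0.0
    # Sit out: fired for underperformance if the herd wins; collapse still pays zero.
    ev_sit_out = p_trade_works * -1.0 + (1 - p_trade_works) * 0.0
    print(ev_follow, ev_sit_out)  # 0.9 vs -0.9: joining the herd dominates

For the individual trader, following the herd dominates regardless of how bad the systemic outcome is, because the downside is shared and the upside is private.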
> Make the owners and lenders of firms responsible.
Here's the trouble with that: You haven't solved the mundane version where a mid level manager destroys a division of a large corporation but not the entire corporation. In that case, shareholders losing a large chunk of money is what already happens today. They lose the value of the entire division, and they therefore have a large incentive to prevent it. But it still happens. The shareholders in general have devised no universally effective way to prevent it.
If you increase their exposure, you haven't given them any new defense to the attack, so all you really end up doing is encouraging them not to invest in stocks because stocks now have increased risk over other investment securities.
Your proposal is, essentially, to put incentives in place for the market to solve the problem, and then you assume without proof that with the right incentives the market will solve the problem. But someday somewhere you actually need a solution which parties wanting to avoid being damaged can employ to solve the underlying problem, ideally without overcompensating and causing parties to become unduly risk-averse.
In theory (stress: theory) the shareholders (represented by the board) put in a CEO who establishes systems to minimize the risk of mini blow-ups. But if the govt doesn't have to bail out a firm, who outside it should worry? If an overly ambitious marketing manager at Colgate wastes 200 million dollars on a stupid idea, it's not my problem as an outsider.
As a corporate insider I can encourage systems where people have the right balance between risk taking and risk aversion. (Most large companies are too risk averse. Try implementing a change initiative at one.)
> In theory (stress: theory) the shareholders (represented by the board) put in a CEO who establishes systems to minimize the risk of mini blow-ups.
In theory. But if that isn't what happens in practice then what good is it?
> But if the govt doesn't have to bail out a firm, who outside it should worry?
You don't have to worry about the shareholders of some individual mismanaged corporation, what you have to worry about is that the principle you're trying to apply to prevent systemic risk has an established track record of being ineffective when applied to prevent internal risk, and it is not clear why you should expect any different result.
Again, the trouble is that you need a solution to the underlying problem. What "systems to minimize the risks" are there that corporations could cost effectively employ but are not already in place? How do you fix the problem that in a competitive market a company that spends resources on managing risks will be at a short-term disadvantage against a company that takes blind risks, and may consequently not survive long enough to see the day when its competitor has to pay the price for its risk taking?
I agree that too big to fail is an unacceptable piece of public "policy". If shareholders, bondholders & counterparties can't be taken to 0 without collapsing an economy, we have a dangerous moral hazard.
But I think that asymmetric risk and moral hazard are just a reality of the world sometimes. They can't be completely eliminated.
They can't be eliminated completely, but they can be reduced. Think of the lemon problem with cars. [1] The seller has a lot more information than the buyer. The buyer can never know as much as the seller about what's wrong. In theory it becomes impossible to sell good cars because of the lemons.
One way to solve this is to have laws where you can go back to the seller. Another way is to have a 3rd party provide a warranty.
Similarly, in financial markets, you can reduce the moral hazard with smart design. You'll still have some residual, but such is the nature of human interactions.
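Here's a toy version of the lemons unraveling, under standard textbook assumptions (quality uniform on [0, 1], buyers value a car at 1.5x its quality, owners only sell at or above their car's quality; all simplifications):

    # Akerlof-style unraveling: the price chases a falling average quality.
    price = 0.75  # buyer's opening offer: 1.5 x the population average quality of 0.5
    for step in range(10):
        avg_quality_offered = price / 2    # only owners with quality <= price will sell
        price = 1.5 * avg_quality_offered  # buyer re-prices for the adverse selection
        print(f"step {step}: price = {price:.4f}")
    # Price decays by 25% per round toward zero: good cars exit, lemons remain.
    # A warranty or a return law caps the buyer's downside and stops the spiral.

The warranty works precisely because it transfers some risk back to the informed party, which is the same design principle you'd want in financial markets.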
Yes. Sometimes people forget that money already spent is a sunk cost. This is endemic in IT projects. "We have to get this in, we've already spent $20 million, and they'll fire us if we have nothing to show for it." Famous last words...
"someone described the path to career success as "Jumping into a lake, creating as big a splash as possible, and getting out of the lake before the water comes down.""
In the UK, I've overheard the term "Seagull Management" used; "Fly in, shit everywhere, fly off".
This is an interesting post, and I really like it. Thanks for writing it all down for us.
A mostly off-topic anecdote: this makes me think of my first job, as an intern. I was working for a senior tech manager, and I remember this moment clearly. I had only been working for the company for a few weeks. We were in his office, and he received a phone call. He was on the phone for a while, and I could hear him committing to some things. After he got off the phone he explained the call to me and explained his approach to working in the corporation.
The person who had called him was a peer, another manager, who had asked him to help with another project that they were working on. I could hear on the call that he verbally agreed to whatever it was they were asking. He was quite agreeable. However, he told me afterward that he had no intention of following through. He pointed to an org chart on the wall. There were several people on the chart highlighted. He proceeded to explain the rules: I would have two people I must impress: my boss's boss, and my boss's boss's boss. To everyone else, I must be amiable and agreeable but never actually do work for them. I must present myself well and say interesting things in meetings. But my work should be directed at helping those two people achieve their career goals, while being as visible as possible in the process.
I thought this was clearly a cynical way of living. However, it was quite successful for this guy. I stayed on with the company for several years after I graduated. During that time, my mentor had been promoted to VP and then general manager. With his help I was promoted to his old position.
I don't think that these types of people are all that rare, unfortunately.
In the end, I had way too much 'moral fiber' to continue on that path. Taking his sociopathic approach always grated on me internally - it just felt wrong. I didn't have enough motivation for success to quell the feeling. I ended up abandoning management (even though I thought myself a fairly effective manager and leader) and going back to being a 'loser' at the bottom.
Luckily for me, the tech world has grown enough in the time since that demand for our loser jobs is really high. I am now able to do fun things, develop projects, feel good about the work that I do, and in the end actually make a really good living. However, most people aren't in such a situation. They must play this absurd game to get ahead.
> I thought this was clearly a cynical way of living. However, it was quite successful for this guy. I stayed on with the company for several years after I graduated. During that time, my mentor had been promoted to VP and then general manager. With his help I was promoted to his old position.
Maybe I'm misunderstanding what you're saying here, but I don't see it as being all that bad.
Sometimes when I go to meetings I don't know what the overall "strategy" of my boss is. I don't want to say things that might fuck up his plans, so if a client is asking me whether we are doing X and whether we should be doing X, I don't immediately say "no, that's a bad idea" even if I think so, because I know the value of presenting a unified front.
Behind the scenes I can tell my boss that I don't think doing X is a good idea, but ultimately people tend to be wishy-washy and showing a lack of confidence or a lack of cohesion with your team is a bad thing in and of itself.
What he had working is not a bad thing. In fact, that is how people in a chain of command should act. Impressing your chain of command means achieving what they want done quickly and efficiently, which in turn means achieving what the company's big plan needed your team to achieve. You might not know the big plans of the company's heads, but as long as you get done what you are asked to do, you can't go wrong. The task the other team asked him to help with might have been a lower priority for the company (if it were a higher priority, the request to help those folks would have come through his own chain of command), so he would have been screwing up the company's plans if he had decided to help in a misguided attempt to "help" the overall company. His only failing was not helping others IF he had some free time. That might have gotten him to the GM role even faster: instead of just looking good in front of other teams, they would have spoken well of him too.
A lot of this depends on the type of organization. Some organizations are very top-down, with corporate metrics and P&L flowing down a hierarchy. In firms like this, the approach your boss mentioned works very well. As long as everyone is working to make things happen up their management chain, the organization survives. The nice thing is people tend to have control over what they're accountable for. The down side is teamwork is not incentivised.
Some organizations are very much about cross-organizational teamwork. The bosses have very broad spans of control, and they expect their underlings to work things out between each other. These cultures promote teamwork. The downside is accountability sometimes gets lost.
What's interesting is both types of firms sound similar from the outside. Everyone talks about the importance of teamwork. Once you get inside, or see how they work, you realize it's not always that way.
A couple of examples... In general, banks support the "As long as you hit your # and you make your boss look good, you're ok" mindset. Some firms (Goldman Sachs - like them or not) do find a way to cooperate, and bring the entire breadth of the firm to their customers. Others are much more of an "Every desk for themselves" mindset.
Some consulting firms tend to be good about bringing the best experts from around the world to solve problems. (McKinsey for example, again like them or not) Others are very local office driven, with everyone worrying about their own P&Ls.
I've run into the Gervais principle in practice too, and I have a different thought. It's a thought that I find very depressing.
It seems to me that the vast majority of people are timid, conservative, and simply too afraid to do anything. The sociopathic business hustler types I've met may be awful people, total assholes who stomp on everyone they can, but they also are unafraid. They're initiators. They make things happen.
It seems sometimes like there's two kinds of people in the world: monsters and the timid. Heroes only exist in comic books.
If this is true, then perhaps societies and economies tolerate the antics of the sociopaths because they are the only people who initiate new actions on a large scale. I definitely see that tolerance in peoples' attitudes toward "captains of industry" who have obvious narcissistic or sociopathic leanings: "sure he's an arsehole but without him there would be no X."
That's just not true. Personality traits are pretty well studied, and you don't just get monsters and the timid.
There's plenty of people who mean well, and are "doers".
Personality traits tend to be normally distributed. There's a few heroes, a few monsters, and a lot of in-betweeners; even if you're just looking at people with low anxiety and low agreeableness.
And there's a reason why a lot of those "in between" movers and shakers don't try to take down the assholes - it's because these assholes generally go down in flames eventually. It's generally not from some Batman figure taking them on, but a quorum of their peers going "nope, fuck that guy" when they temporarily hit a wall and need someone to throw them a line.
Based on the pitch/plan of the manager of the Stamford branch, the C-level executives plan to close the Scranton branch and make the manager of Stamford the head of the northeast division. The Stamford manager leverages this new potential position to get a C-level position at Staples, thus destroying all the plans that Dunder Mifflin had made.
This example isn't cited in the Gervais Principle series, but it is a classic example of the sociopath layer devising a heads-I-win-tails-you-lose scenario.
Libertarians find it ridiculous and innocent to think that moneyed people or corporations are not going to lobby, bribe, manipulate and otherwise take advantage of a political system that can arbitrarily create subsidies, trade barriers, zoning laws, etc. If the opportunity to profit by corrupting the system exists, it will happen. Appealing to morality is naive. So is running around trying to plug the holes in our laws and tweak incentives. The only way to plug the hole is to take away the opportunity by constraining and defunding the government, reducing its ability to be corrupted.
Neo-Marxists find it ridiculous to think that we can have a handful of billionaires & corporations controlling all the wealth in a country without that handful manipulating and controlling the political system. Big money has always come hand in hand with power. It's silly to expect morality to fix this. The super wealthy will control government. We can either be ruled by the wealthy or we can eliminate wealth of this magnitude. Those are the only choices.
Personally I think both of these are naive, in much the same way. They are modernist ideas that assume an understanding we just don't have. Human systems are complex and cannot be designed. The idea that we can start with a limited set of laws and a small government, let everything else emerge, and trust that the consequences of that emergence (including the political ones that may derail it) will be acceptable is not that different from the old communist ten-year plans.
Ideas like skin-in-the-game are in that category, IMO. Useful as a perspective. Sometimes useful in practice. Not realistic as a solution in many/most cases. Sometimes the world presents opportunities for asymmetric risk. Even if we go by the Hammurabi code suggested by Nassim Taleb (currently promoting this idea), of killing an architect whose building collapses, there is potential for asymmetric risk. Maybe it's small and asymmetric enough to be worth it (5% chance of killing 100 people; huge commission; the architect is old anyway). Maybe the immortal legacy of building a temple is worth the risk of death to the architect, but not the risk of death to the worshipers.
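To put rough numbers on that architect example (the 5% and the 100 deaths are from above; the utility figures are invented):

    # Architect's bet under a Hammurabi rule: his life is on the line, yet the
    # asymmetry can survive. All utilities are made-up illustrative units.
    p_collapse = 0.05
    commission = 100       # value to the architect of the commission + legacy
    own_death = 500        # cost to the (old) architect of his own death
    ev_architect = (1 - p_collapse) * commission - p_collapse * own_death
    ev_public = -p_collapse * 100  # expected deaths borne by the worshipers
    print(ev_architect)  # +70: still a good bet for him, skin in the game or not
    print(ev_public)     # -5 expected deaths carried by people who staked nothing

Skin in the game shrinks the asymmetry but doesn't abolish it: the architect risks one life he discounts heavily, while the public risks a hundred it never agreed to stake.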
My point was that I don't think we can build systems (or meta-systems that let systems emerge) that don't have a risk of moral hazard. Moral hazards exist. Usually morality is the anecdote. Sometimes it isn't.
I think the big problem with designing human systems is that as systems evolve there is a parallel unpredictable cultural evolution.
I guess the only reasonable way to design human systems is through a slow process of evolution and iterative design, at each point trying to modify rules to adapt to the culture that develops around how people use the system as it exists. When a system is fully formed, along with the system rules there are also a raft of social norms that determine how it functions within society. Moral hazards create situations where individuals are incentivised to break the social norms that align with the goals of the system. Once norms get eroded, that behaviour can become commonplace, and you end up with broken systems.
Unfortunately, the meta-systems we have in place restrict what changes are possible/incentivised. Also, people tend to have a limited imagination when it comes to solving broken systems usually wanting either a) more rules, b) harsher punishments or c) to get rid of the system entirely. So, if there's a public outcry, it's usually calling attention to a real problem, but calling for an impractical solution.
Personally I don't think the current meta-systems we have in Western capitalist democracies are optimal. I'm sure that better can be done. However, I'm also very sure that we can't do better by building on idealistic principles - because of the massive changes that would entail and the corresponding unpredictability of the results.
It's interesting that libertarians and neo-marxists basically want really extreme changes in two directions that almost never happen. The rich tend to always get richer; the government tends to always get bigger in size and scope. I guess this is kind of like the "get rid of it entirely" mentality. Perhaps it would be better to seek out changes to our current system that would allow iterations towards reducing income disparity and shrinking the government.
Nice post, thanks for writing. It's interesting how we don't then focus (at least in the now secular West) on ways for morality to seep into our collective consciousness. The answer historically has always been religion (at least in the last few millennia) but today people get all jumpy when even trying to talk about it.
Yeah, antidote. I can't believe how sloppy my writing is these days.
I'm not sure how religion plays into things. You can see on this thread a very strong example of how modernism still dominates a lot of our thinking. We want to be able to place 'moral issues' into rational epistemological frameworks or political theories. I have the same instincts. Does game theory work here? Can we tweak & change the legal definition of corporations or encourage limited partnerships to avoid this or that pathology?
You're right that even talking about morality feels religious. I think it's a mistake to avoid dealing with morality as an independent thing. These attempts to get morality from amorality are a dead end, I think. I don't accept that politicians are inevitably amoral slaves to political expediency. That's bullshit. Even the US, with its "conservative" streak of libertarian-rationalism, was built by people of moral virtue, or at the very least a mythology of those people. Fuzzy as it is, morality is a central part of being human.
I think you misunderstand the libertarian position. You can lobby a minimalist government all you like, it can't do anything to help you because it doesn't have the power or resources to do so.
The thing I find silly with libertarianism is that you're describing an ideal with a massive power vacuum - one that will be filled. So government is minimalist and ineffectual, in that case, they are a sham and the real "governance" is done by large corporations/wealthy behind closed doors.
Similar downsides as a large central government, but you have no vote unless you're massively wealthy/powerful.
By 'constraining and defunding the government' I basically mean smaller government in budget & scope/mandate. Is that different from your definition of minimalist?
>Ensure top level management has skin in the game (e.g. partnership).
It's better than nothing, sure. But let's not forget that over half of Richard Fuld's wealth was in Lehman Brothers stock. It didn't prevent him from flying his company straight into the ground.
That kind of behavior is strangely similar to the kind of behavior that corrupts the public sector and government.
Isn't it ironic that we are so quick to bring about arguments regarding the inefficiencies of command economies and big governments due to corruption, bureaucracy, and misalignment of government worker incentives, when our economy is made up of these same smaller scale command economies exhibiting these same problems and inefficiencies?
Great post. It really comes down to morality and "cooperation" in the sense of the Prisoner's Dilemma. The more people see defectors succeed, the more likely they are to defect themselves. But once enough people are defecting, almost everyone is worse off.
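A minimal sketch of that dynamic (the population size, seed count, and imitation rule are all invented; it's just the qualitative story):

    import random

    # Defection spreads in proportion to how much defection people see succeed.
    random.seed(0)
    n, defectors = 1000, 10  # population, initial defectors
    for gen in range(20):
        visible = defectors / n
        # Each remaining cooperator defects with probability 0.5 * visible.
        converts = sum(random.random() < 0.5 * visible for _ in range(n - defectors))
        defectors += converts
        print(f"gen {gen:2d}: {defectors} defectors")
    # Growth is slow at first, then runaway: cooperation collapses within ~20 rounds.

Which is the cancerous-meme point from the original post: a handful of initial defectors is enough, as long as their success is visible and unpunished.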
This public policy guy looks like a perfect candidate to be a head of a federal department one day. Degree from top university, impressive resume, wide experience... Imagine he's in charge of your healthcare, or education, or nuclear industry, or national security.
(1) http://www.ribbonfarm.com/the-gervais-principle/