
Absolutely correct. In essence, everyone simply looks at the winners and then redefines the 'path to success' to be 'the path they took'. That sounds ridiculous but it's what many arguments boil down to.

There are some excellent resources out there for people to understand more about this. There's an article on survivorship bias [1], and I was recently at a talk by Duncan Watts on 'The Myth of Common Sense', which considers questions such as 'Why is the Mona Lisa the most famous painting in the world?'. I found a video of one of his earlier talks [2].

[1] http://youarenotsosmart.com/2013/05/23/survivorship-bias/

[2] http://www.youtube.com/watch?v=EF8tdXwa-AE



> In essence, everyone simply looks at the winners and then redefines the 'path to success' to be 'the path they took'.

The classic example is Jim Collins' bestselling business book, Good to Great. The book claimed that certain characteristics of successful companies made them Great.

Inconveniently for Collins, after the book came out, these very same companies underperformed and went from Great to Good.

Of course, he did the statistics backwards. He started with the successful companies and looked for common traits. He should've started with the traits and evaluated how companies with and without those traits performed.
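
A quick way to see why the backwards direction misleads: even if every trait is pure noise, picking the winners first and then hunting for shared traits will turn something up. A rough sketch (the company count, trait count, and 'top 10' cutoff are all made up for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    n_companies, n_traits = 1000, 50

    # Traits are coin flips and performance is pure noise, so no trait
    # actually predicts anything.
    traits = rng.integers(0, 2, size=(n_companies, n_traits))
    performance = rng.normal(size=n_companies)

    # Backwards (start from the winners): take the top 10 performers
    # and look for their most common trait.
    winners = np.argsort(performance)[-10:]
    shared = traits[winners].mean(axis=0)
    print("share of winners with the most common trait:", shared.max())

    # Forwards (start from the trait): companies with and without that
    # same trait perform about the same.
    best = shared.argmax()
    print("with trait:   ", performance[traits[:, best] == 1].mean())
    print("without trait:", performance[traits[:, best] == 0].mean())

With enough candidate traits, some trait will show up in most of the winners by chance alone, while the forward comparison shows no effect at all.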


> Of course, he did the statistics backwards. He started with the successful companies and looked for common traits. He should've started with the traits and evaluated how companies with and without those traits performed.

I don't know whether this would give useful insight, and it's possible someone has already looked into it, but this way of posing the difference sounds related to the research field of "one-class machine learning". The usual setup for classification in ML is that you have examples from all the classes: in a positive/negative two-class setting you need both positive and negative examples. But what if you really have a one-class dataset, e.g. just a list of failures and their characteristics? Can you (in a predictively reliable way) generalize anything from that single class, and predict whether future cases presented to the classifier are like the examples you trained on? There's quite a bit of work looking into that. (Unfortunately, I don't know much about it.)
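
To make that concrete, here's roughly what such a setup looks like with scikit-learn's OneClassSVM; the 'failure' features below are invented purely for illustration. You fit on examples from the single class you have, then ask whether new cases resemble it:

    import numpy as np
    from sklearn.svm import OneClassSVM

    # Toy one-class dataset: characteristics of known failures only
    # (say, burn rate and months of runway; both invented here).
    failures = np.array([
        [0.90, 2], [0.80, 3], [0.95, 1], [0.85, 2], [0.90, 4],
    ])

    # Fit on the single class; nu roughly caps the fraction of
    # training points treated as outliers.
    clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(failures)

    # predict() returns +1 for cases that look like the training class
    # and -1 for cases that don't.
    new_cases = np.array([[0.88, 2], [0.10, 36]])
    print(clf.predict(new_cases))

How far that generalizes in a 'predictively reliable' way is exactly the open question, of course.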


> Absolutely correct. In essence, everyone simply looks at the winners and then redefines the 'path to success' to be 'the path they took'. That sounds ridiculous but it's what many arguments boil down to.

Honestly, I think it's more often worse than that: what is actually presented as the path to success is a self-justifying mythology, created by those who have succeeded in the current system to explain their success, rather than the actual path they've taken.



