
It is fairly rare to see an ex-employee put a positive spin on their work experience.

I don't think this makes OpenAI special. It's just a good reminder that the overwhelming majority of "why I left" posts are basically trying to justify why a person wasn't a good fit for an organization by blaming it squarely on the organization.

Look at it this way: the flip side of "incredibly bottoms-up" from this article is that there are people who feel rudderless because there is no roadmap or a thing carved out for them to own. Similarly, the flip side of "strong bias to action" and "changes direction on a dime" is that everything is chaotic and there's no consistent vision from the executives.

This cracked me up a bit, though: "As often as OpenAI is maligned in the press, everyone I met there is actually trying to do the right thing" - yes! That's true at almost every company that ends up making morally questionable decisions! There's no Bond villain at the helm. It's good people rationalizing things. It goes like this: we're the good guys. If we were evil, we could be doing things so much worse than X! Sure, some might object to X, but they miss the big picture: X is going to indirectly benefit the society because we're going to put the resulting money and power to good use. Without us, you could have the bad guys doing X instead!



I would never post any criticism of an employer in public. It can only harm my own career (just as being positive can only help it).

Given how vengeful Altman can reportedly be, this goes double for OpenAI. This guy even says they scour social media!

Whether subconsciously or not, one purpose of this post is probably to help this guy’s own personal network along; to try to put his weirdly short 14-month stint in the best possible light. I think it all makes him look like a mark, which is desirable to employers, so I guess it is working.


Calvin cofounded Segment, which was acquired for $3.2B. He's not your typical employee.


So this guy is filthy rich and yet decided to grind for 14 months with a newborn at home?

I guess that's why he's filthy rich.


I had a chance to join OpenAI 13 months ago too.

But I had a son 14 months ago.

There was absolutely no way I was going to miss a critical part of my baby’s life in order to be in an office at 2am managing a bad deployment.

Maybe I gave up my chance at PPU or RSU riches. But I know I chose a different kind of wealth that can never be replaced.


Wow, ditto! I thought I was the only one who took an extended leave to watch their baby grow up. Totally worth it, and it was a wonderful experience being able to focus 100% on her.


My daughter was born in 2020, when my employer was going through big changes and the world around me was obviously in chaos. There were real opportunities to work long days and advance in our new parent company. Instead, I took every day of paternity leave that they'd let me have and tossed in some PTO for good measure. There's nothing like being able to spend all day learning your new baby.


You both 100% made the right choice. The number of apologists for terrible fathers in this thread explains a lot.


Way to go: leave the boring chores of the first months to the partner, then join in once the little one starts to be more fun after a year. With all that cash, I'm sure they could buy plenty of help for the partner too.


I don't know; when I became a parent I was in for the full ride, not to have someone else raise her. Yes, raising includes changing diapers and all that.


You make it sound like your choice is somehow the righteous one. I'm not convinced. What's wrong with hiring help, as long as it's well selected? And anyway, usually the help would take care of various errands to free up mom so she can focus on her baby. But maybe they have happily involved grandparents. Maybe he was working part-time. Or maybe there's some other factor we're completely missing right now.


So you sincerely think it’s ok that everybody takes care of the kid but the father because he’s rich and can afford multiple nannies? There’s not much context to miss when TFA has this:

> The Codex sprint was probably the hardest I've worked in nearly a decade. Most nights were up until 11 or midnight. Waking up to a newborn at 5:30 every morning. Heading to the office again at 7a. Working most weekends.


Does a household necessarily need multiple nannies to raise a baby? Grandparents might be willing to help, and if there's some house help as well, no nanny may be needed at all, as long as the wife is happy with the arrangement, which I don't find impossible to entertain. Yeah, wealth allows for more freedom of choice; that's always been the case, but this type of arrangement is not unheard of across social classes.


A billionaire asking the grandparents for help with a newborn instead of spending some dollars on that help? C'mon, have you ever had a newborn?


>> free up mom so she can focus on her baby

Their baby, I presume…not just hers.

Literally any excuse for the man to not be involved.


There are certain experiences in life that one needs to go through to stay grounded in what really matters.


The people who will disagree with this statement would say, full-throated, that what really mattered was shipping on time.

Couldn't be me. I do my work, then clock the fuck off, and I don't even have kids. I wasn't put upon this earth to write code or solve bugs, I just do that for the cash.


There is some parenting, and then there is good parenting. Most people don't have this option due to finances, but those who do and still dodge it, picking up just the easy and nice parts - I don't have much sympathy or respect for them.

Then later they even have the balls to complain about how unruly kids are these days, never acknowledging the massive gaps in their own care.

Plus it certainly helps the kid with bonding and emotional stability, and it keeps the parent more emotionally in touch with their own kid(s).


> Then later they even have the balls to complain about how unruly kids are these days, never acknowledging the massive gaps in their own care.

My favorite is ‘I can’t understand why my kid didn’t turn into a responsible adult!’

Cue a look back at what opportunities the parent gave them, over the last 20 years, to learn and practice those skills.


Yeah, or let the partner have the easy period, before they're mobile and while they still sleep half the day, and then join the fun once they can wander off into the craft supplies, the pantry where the sugar and flour are stored, or the workshop with the power tools, and once they drop the naptime and instead start waking at 5am asking you to play Roblox with them.

Either option is priceless :-)


I just went through this period.

I would not describe it as easy.


You do know that early bonding experiences are crucial for a newborn's lifelong development, right? This reads like satire or, if serious, like plain child maltreatment.


It’s obvious why the HN community has downvoted this comment, but you’re absolutely spot on.

This thread reads like all the excuses for the emotional and actual abandonment of a mother and a newborn for a man’s little work project.


Pushing it a bit there, aren't we?


“Child abuse or maltreatment constitutes all forms of physical and/or emotional ill-treatment, sexual abuse, neglect or negligent treatment or commercial or other exploitation, resulting in actual or potential harm to the child’s health, survival, development or dignity […] Neglect includes the failure to provide for the development of the child in all spheres: health, education, emotional development, nutrition, shelter and safe living conditions.”

Source: World Health Organization, Child maltreatment, Fact sheet, 2020 https://www.who.int/news-room/fact-sheets/detail/child-maltr...

“The term ‘child abuse and neglect’ means, at a minimum, any recent act or failure to act on the part of a parent or caretaker, which results in death, serious physical or emotional harm, sexual abuse or exploitation, or an act or failure to act which presents an imminent risk of serious harm. This includes emotional neglect such as “extreme or bizarre forms of punishment, deliberate cruelty or rejection, or the failure to provide the necessary psychological nurturing.”

Source: U.S. Department of Health and Human Services, Child Abuse Prevention and Treatment Act (CAPTA) https://acf.gov/cb/law-regulation/child-abuse-prevention-and...

Emotional neglect includes “acts of omission, such as the failure to provide developmentally appropriate affection, attention, or emotional support.”

Source: APSAC, Practice Guidelines: The Investigation and Determination of Suspected Psychological Maltreatment of Children and Adolescents, 2017 https://apsac.org/guidelines


Winston, R., & Chicot, R. (2016). The importance of early bonding on the long-term mental health and resilience of children. London Journal of Primary Care, 8(1), 12–14. https://doi.org/10.1080/17571472.2015.1133012

Brown, G. L., Mangelsdorf, S. C., & Neff, C. (2012). Father involvement, paternal sensitivity, and father-child attachment security in the first 3 years. Journal of Family Psychology, 26(3), 421–430. https://doi.org/10.1037/a0027836

Deneault, A. A., Bakermans-Kranenburg, M. J., Groh, A. M., Fearon, P. R. M., & Madigan, S. (2021). Child-father attachment in early childhood and behavior problems: A meta-analysis. New Directions for Child and Adolescent Development, 2021(180), 43–66. https://doi.org/10.1002/cad.20434

Scism, A. R., & Cobb, R. L. (2017). Integrative review of factors and interventions that influence early father-infant bonding. Journal of Obstetric, Gynecologic, & Neonatal Nursing, 46(2), 163–170. https://doi.org/10.1016/j.jogn.2016.09.004

Jeong, J., Franchett, E. E., Ramos de Oliveira, C. V., Rehmani, K., & Yousafzai, A. K. (2021). Parenting interventions to promote early child development in the first three years of life: A global systematic review and meta-analysis. PLoS Medicine, 18(5), e1003602. https://doi.org/10.1371/journal.pmed.1003602

Joas, J., & Möhler, E. (2021). Bonding in early infancy predicts childrens' social competences in preschool age. Frontiers in Psychiatry, 12, 687535. https://doi.org/10.3389/fpsyt.2021.687535

Thümmler, R., Engel, E.-M., & Bartz, J. (2022). Strengthening emotional development and emotion regulation in childhood—as a key task in early childhood education. International Journal of Environmental Research and Public Health, 19(7), 3978. https://doi.org/10.3390/ijerph19073978


Lots of wealthy families have dysfunctional internal emotional patterns. A quick stat: there is more alcoholism among the wealthiest 1% than among the general population across the USA.


Wow! Wanting to work hard at building cool things == dysfunctional internal emotional pattern

Sums up the Western workforce attitude and why immigrants continue to crush them.


It's unlikely he sees or even perceives what he's doing as a grind, but rather something akin to an exciting and engrossing chase or puzzle. If my mental model of these kinds of Silicon Valley types is correct, he's also unlikely to be in it for the money, at least not at the narrative-self level. He most likely was "feelin' the AGI", in Ilya Sutskever's immortal words, i.e. feeling like this might be a once-in-a-million-years opportunity to birth a new species, if not a deity even.


Which is a YC startup. If you know anything about YC, it's that the network of founders supports each other no matter what.


> no matter what

except if you publicly speak of their leaders in less-than-glowing terms


Some books do a good job of documenting the power struggles that happen behind closed doors, big egos backed by millions clashing over ideas and control.

Not gonna lie, the entire article reads more like a puff piece than an honest reflection. Feels like something went down on Slack, some doors got slammed, and this article is just trying to keep them unlocked. Because no matter how rich you are in the Valley, if you're not on good terms with Sam, a lot of doors will close. He's the Valley's prodigy son, adopted by Bill Gates and Peter Thiel, and secretly admired by Elon Musk. With Paul Graham's help, he spent 10 years building an army of followers by mentoring them and giving them money. Most of them are now millionaires with influence. And now even the most powerful people in tech and politics need him. Jensen Huang needs his models to sell servers. Trump needs his expertise to upgrade defence systems. I saw him shaking hands with an Arab sheikh the other day. The kind of handshake that says: with your money and my ambition, we can rule the world.


Why, that's exactly what we desperately need: more "rule the world" egos!


That's even more of a reason not to badmouth other billionaires/billion-dollar companies. Billionaires and billion-dollar companies work together all the time. It's not a massive pool. There is a reason beef between companies, top-level execs, and billionaires is all rumors and tea-talk until a lawsuit drops out of nowhere.

You think every billionaire is gonna be unhinged like Musk calling the president a pedo on twitter?


Hebephile or ephebophile rather than pedo, to be precise. And we all saw how great a friend he was with Epstein for decades: a frequent visitor to his parties, dancing together, and so on. Not really a shocking statement, whether true or not.


He is still manipulable and driven by incentives like anyone else.


What incentives? It's not a very intellectual opinion to give wild hypotheticals with nothing to go on other than "it's possible".


I am not trying to advance wild hypotheticals, but something about his behavior does not quite feel right to me. Someone who has enough money for multiple lifetimes, working like he's possessed to launch a product minimally different from those at dozens of other companies, leaving his wife with all the childcare, then leaving after 14 months and insisting he was not burnt out, but without a clear next step, not even "I want to enjoy raising my child".

His experience at OpenAI feels overly positive and saccharine, with a few shockingly naive comments that others have noted. I think there is obvious incentive. One possibility is that he is burnt out but does not want to admit it. Another is that he is looking to the future: keeping options open for funding and connections if (when) he chooses to found again. He might be lonely and just want others in his life. Or he might want to feel like he's working on something that "matters" in some way his other company didn't.

I don't know at all what he's actually thinking. But the idea that he is resistant to incentives just because he has had a successful exit seems untrue. I know people who are as rich as he is, and they are not much different from me.


Calvin just worked like this when I was at Segment. He picked what he worked on and worked really intensely at it. People most often burn out because of the lack of agency, not hours worked.

Also, keep in mind that people aren't the same. What seems hard to you might be easy to others, and vice versa.


> People most often burn out because of the lack of agency, not hours worked.


Why did Michael Jordan retire 3 times? Sure, you could probably write a book about it, but you would want to get to know the guy first.


The first time, in '93, because of burnout from the three-peat and, allegedly, a gambling problem; the second because of the lockout and Krause pushing Phil out; the third because he was too old.


Not sure if it's genuine insight or just a well-written bit of thoughtful PR.

I don't know if this happens to anyone else, but the more I read about OpenAI, the more I like Meta. And I deleted Facebook years ago.


I know Calvin, and he's one of the most authentic people I've worked with in tech. This could not be more off the mark.


This reflection seems very unlikely to be authentic, because it is full of superlatives and not a single bad (or even merely not-great) thing is mentioned. Real organizations made of real humans simply are not like this.

The fact that several commenters know the author personally goes some way toward explaining why the entire comment section seems to have missed the utterly unbalanced nature of the article.


People come out to defend their bosses a lot on this site, convincing themselves that they know the powerful people best, that they’re “friends”. How can someone be so confident that a founder is authentic, when a large part of a founder's job is to make you believe so (regardless of whether they are), and the employee’s own self-image pushes them to believe it too?


Some teams are bad, some teams are good.

I've always heard horror stories about Amazon, but when I speak to most people at, or from Amazon, they have great things to say. Some people are just optimists, too.


sounds exactly like a “typical employee”


>This guy even says they scour social media!

Every, and I mean every, technology company scours social media. Amazon has a team that monitors social media posts to make sure employees, their spouses, their friends don’t leak info, for example.


> There's no Bond villain at the helm. It's good people rationalizing things.

I worked for a few years at a company that made software for casinos, and this was absolutely not the case there. Casinos absolutely have fully shameless villains at the helm.


Interesting. A year ago I joined one of the larger online sportsbook/casinos. In terms of talent, employees are all over the map (both good and bad). But I have yet to meet a villain. Everyone here is doing the best they can.


Every villain wants to be the best villain they can be!

More seriously, everyone is the hero of their own story, no matter how obvious their failings are from the outside.

I’ve been burned by empathetically adopting someone’s worldview and only realizing later how messed up and self-serving it was.


I’m sure people working for cigarette companies are doing the best they can too. People can be good individuals and also work toward evil ends.


I am of the opinion that the greatest evils come from the most self-righteous.


That may very well be the case. But I think this is a distinct category of evil; the second one, in which you'll find most of the cigarette and gambling businesses, is that of evil caused by indifference.

"Yes, I agree there are some downsides to our product and there are some people suffering because of that - but no one is forcing them to buy from us, they're people with agency and free will, they can act as adults and choose not to buy. Now what is this talk about feedback loops and systemic effects? It's confusing, go away."

This category is where you'll also find most of the advertising business.

The self-righteous may be the source of the greatest evil by magnitude, but day-to-day, the indifferents make it up in volume.


It's not indifference, it's much more comically evil. Like, they're using software to identify gambling addicts on fixed incomes, to figure out how big retirees' social security checks are, and to ensure they lose the entire thing at the casino each week. They bonus out their marketing team for doing this successfully. They're using software to make sure that when a casino host's patron runs out of money and kills themselves, the casino host is not penalized but rewarded for a job well done.

At 8am every morning, the executives walk across the casino floor on their way to the boardroom, past the depressed people who have been there gambling by themselves the entire night, seeing their faces; then they go into the boardroom to strategize ways to get those people to gamble even harder. They brag about it. It's absolute pure villainy.


I wouldn't know if this is a fair characterization of other companies, but it certainly isn't anything like what I observe here. If you can't name names, I'm going to guess you just made this up.


We had a few dozen customers, and "percent of wallet" (figuring out how much money they walk into the casino with vs. how much they leave with) is a standard metric in casino marketing everywhere. You can figure out someone's paycheck from their coming in the same day of the week and losing the same amount multiple times, and market to them to ensure they lose their whole paycheck more often.

It's trivially easy to spot gambling addicts in the data, and in markets with better protections for gambling addicts they have to approach marketing quite differently. In some places you're allowed to ban yourself from the casino, and it's super illegal for the casino to market to you, so there are tons of protections to prevent all emails, texts, phone calls from hosts, physical mailers, ads of any form from reaching you.

The suicide anecdote is what caused me to quit. I'm ashamed to admit I asked my team to use an "IsDeceased" flag in the calculation of host bonus compensation, for when a patron dies while assigned to them. After that, I tried to transfer to the non-casino corner of the business, where they were trying to sell our software to sports stadiums, and when they killed that off a few months later, I left the company. This was circa 2016, at a casino in the Rust Belt, but I'm not going to get more specific than that.


I appreciate this comment. You will see that the modern-day capitalist system, in general, punishes anyone with even a smidgen of the moral compass you have. The world of finance is this in spades. I worked on Wall Street for pretty much my entire adult career and went on to found my own fund. Through a few fucked-up experiences, I came to an epiphany: my investors did not give two flying fucks what kind of person I was as long as I was generating solid returns. Moral compass be damned.

So, the casino industry is perhaps a convenient piñata, when in reality it's not the specific industry, it's the system.


Some people like to smoke. I find it disgusting myself, but as long as people want the experience I see no reason why someone else shouldn't be allowed to sell it to them. See also alcohol, drugs, porn, motorcycles, experimental aircraft, whatever.

We can have all sorts of interesting discussions about how to balance human independence with shared social costs, but it's not inherently "evil" to give consenting adults products and experiences they desire.

IMO, much more evil is caused by busybodies trying to tell other people what's good for them. See: The Drug War.


I disagree. The death toll from smoking is approximately the same as the total death toll of the Holocaust, except smoking racks it up every nine months: roughly 8 million tobacco deaths per year against roughly 6 million Holocaust victims. And 1.3 million/year of those are non-smokers who are dying because they are exposed to second-hand smoke: https://ourworldindata.org/smoking

Even when the self-righteous are at their most dangerous, they have to be self-righteous and in power, e.g.:

  Caedite eos. Novit enim Dominus qui sunt eius.
  ("Kill them. For the Lord knows those who are His.")
- https://en.wikipedia.org/wiki/Caedite_eos._Novit_enim_Dominu....

or:

  រក្សាន្នកគ្មានប្រយោជន៍ខាត។ បំផ្លាញអ្នកគ្មានការខាតបង់
  ("To keep you is no benefit. To destroy you is no loss.")
- https://km.wikipedia.org/wiki/ប្រជាជនថ្មី


I think y'all are agreeing.


Nah this is lawful evil (I Am Following The Rules Therefore I'm Doing The Right Thing) vs. neutral evil (I Just Work Here).


More like Chaotic Neutral. I like a world full of novel things, and I don't moralize about it.


There are jobs one may find oneself in where doing them poorly is better for the world than doing them well.

I think you and your colleagues should sit back and take it easy, maybe have a few beers every lunchtime, install some video games on the company PCs, anything you can get away with. Don't get fired (because then you'll be replaced by keen new hires), just do the minimum acceptable and feel good about that karma you're accumulating as a brake on evil.


> We are all very good and kind and not at all evil, trust us if we do say so ourselves

Do these people have even minimal self-awareness?


VGT?


> It is fairly rare to see an ex-employee put a positive spin on their work experience

Much more common for OpenAI, because you lose all your vested equity if you talk negatively about OpenAI after leaving.


Absolutely correct.

There is a reason there was cult-like behaviour on X amongst the employees supporting bringing Sam back as CEO when he was kicked out by the OpenAI board of directors at the time.

"OpenAI is nothing without it's people"

All of "AGI" (which actually was the lamborghinis, penthouses, villas and mansions for the employees) was all on the line and on hold if that equity went to 0 or would be denied selling their equity if they openly criticized OpenAI after they left.


Yes, and the reason for that is that employees at OpenAI believed (reasonably) that they were cruising for Google-scale windfall payouts from their equity over a relatively short time horizon, and that Altman and Brockman leaving OpenAI and landing at a well-funded competitor, coupled with OpenAI corporate management that publicly opposed commercialization of their technology, would torpedo those payouts.

I'd have sounded cult-like too under those conditions (but I also don't believe AGI is a thing, so would not have a countervailing cult belief system to weigh against that behavior).


> I also don't believe AGI is a thing

Why not? I don't think we're anywhere close, but there are no physical limitations I can see that prevent AGI.

It's not impossible in the same way our current understanding indicates FTL travel or time travel is.


I also believe that AGI is not a thing, but for different reasons. I notice that almost everybody seems to implicitly assume, without justification, that humans are a GI (general intelligence). I think it's easy to see that if we are not a GI, then we can't see what we're missing, so it will feel like we might be GI when we're really not. People also don't seem interested in justifying why humans would be GI but other animals with 99% of the same DNA aren't.

My main reason for thinking general intelligence is not a thing is similar to how Turing completeness is not a thing. You can conceptualize a Turing machine, but you can't actually build one for real. I think actual general intelligence would require an infinite brain.


> I notice that almost everybody seems to implicitly assume, without justification, that humans are a GI (general intelligence). I think it's easy to see that if we are not a GI, then we can't see what we're missing, so it will feel like we might be GI when we're really not.

That's actually a great point which I'd never heard before. I agree that it's very likely that us humans do not really have GI, but rather only the intelligence that evolved stochastically to better favour our existence and reproduction, with all its positive and negative spandrels[0]. We can call that human intelligence (HI).

However, even if our "general" intelligence is a mirage, surely what most people imagine when they talk about 'AGI' is actually AHI, as in an artificial intelligence that has the same characteristics as human intelligence that in their own hubris they believe is general. Or are you making a harder argument, that human intelligence may not actually have the ability to create AHI?

[0] https://en.wikipedia.org/wiki/Spandrel_(biology)


Yes, I do think that people usually mean AHI even when they say AGI, although they don't realize it, because when asked to define AGI they talk about generality and not about mimicking humans. (Meanwhile, when they talk about sentience and consciousness, they will usually only afford that to an artificial entity if it is exactly like a human, and often not even then.)

> Or are you making a harder argument, that human intelligence may not actually have the ability to create AHI?

I wasn't, but I've pondered it since you brought it up. No, I don't think it's impossible to create a greater intelligence than oneself — in fact, evolution has already done it by creating animals, including but not limited to humans. I used to think it was impossible when I pondered science fictional characters like Data from TNG, but modern LLMs show that we can create it without having to understand how it works. Data is depicted as having been engineered, but machine learning is closer to evolution than it is to engineering.


If we were to believe the embodiment theory of intelligence (it's far from the only one out there, but it is very influential and convincing), building an AGI is a problem equivalent to building an artificial human. Not a puppet, not a mock, not “sorta human”, but a real, fully embodied human, down to the gut bacterial biome, because according to embodiment theory, that affects intelligence too.

In this formulation, it’s pretty much as impossible as time travel, really.


Sure, if we redefine "AGI" to mean "literally cloning a human biologically", then AGI suddenly is a very different problem (mainly one of ethics, since creating human clones, then educating, brainwashing, and forcing them to respond to chat messages à la ChatGPT raises a few ethical issues along the way).

I don't see how claiming that intelligence is multi-faceted makes AGI (the A is 'artificial' remember) impossible.

Even if _human_ intelligence requires eating yogurt for your gut biome, that doesn't preclude an artificial copy that's good enough.

Like, a dog is very intelligent, a dog can fetch and shake hands because of years of breeding, training, and maybe from having a certain gut biome. Boston Dynamics did not have to understand a single cell of the dog's stomach lining in order to make dog-robots perfectly capable of fetching and shaking hands.

I get that you're saying "yes, we've fully mapped the neurons of a fruit fly and can accurately simulate and predict how a fruit fly's brain's neurons will activate, and can create statistical analysis of fruit-fly behavior that lets us accurately predict their action for much cheaper even without the brain scan, but human brains are unique in a way where it is impossible to make any sort of simulation or prediction or facsimile that is 'good enough' because you also need to first take some bacteria from one of peter thiel's blood boys and shove it in the computer, and if we don't then we can't even begin to make a facsimile of intelligence". I just don't buy it.


“AGI” isn’t a thing and never will be. It fails even really basic scrutiny. The objective function of a human being is to keep its biological body alive and reproduce. There is no similar objective on which an ML algorithm can be trained. It’s frankly a stupid idea propagated by people with no meaningful connection to the field and no idea what the fuck they’re talking about.


We will look back on this, and in a decade's time the early OpenAI employees (who sold) will speak out in documentaries and movies and admit that "AGI" was a period of easy, dumb money.


The "Silenced No More Act" (SB 331), effective January 1, 2022, in California, where OpenAI is based, limits non-disparagement clauses and retribution by employers, likely making that practice illegal, but I am not a lawyer.


Even if it's illegal, you'll have to fight them in court.

OpenAI will certainly punish you for this and most likely make an example out of you, regardless of the outcome.

The goal is corporate punishment, not the rule of law.


OpenAI never enforced this, removed it, and admitted it was a big mistake. I work at OpenAI and I'm disappointed it happened but am glad they fixed it. It's no longer hanging over anyone's head, so it's probably inaccurate to suggest that Calvin's post is positive because he's trying to protect his equity from being taken. (though of course you could argue that everyone is biased to be positive about companies they own equity in, generally)


> It's no longer hanging over anyone's head,

The tender offer limitations still are, last I heard.

Sure, maybe OA can no longer cancel your vested equity for $0... but how valuable is (non-dividend-paying) equity you can't sell? (How do you even borrow against it, say?)


Nope, happy to report that was also fixed.

(It would be a pretty fake solution if equity cancellation was halted, but equity could still be frozen. Cancelled and frozen are de facto identical until the first dividend payment, which could take decades.)


So OA PPUs can now be sold and transferred without restriction to arbitrary buyers, outside the tender offer windows?


No, that's still the same.


Then how was that "fixed"?


Maybe I misinterpreted "can't sell" - I thought the implication was that even if they said they wouldn't cancel equity outright, they could still exercise power to freeze it out of tender offers, which would have a similar chilling effect. By "fixed" I meant to clarify that not only will they not cancel equity, there's no loophole where they'd specifically freeze it out of participating in tender offers.


> there's no loophole where they'd specifically freeze it out of participating in tender offers.

Again, who said anything about a 'specific loophole'? Needing permission to participate in a tender (which is the only way to sell) is not a 'loophole', and the threat is always there on the table. So again: how was that 'fixed'? Should I interpret your comment as implying that the tender threat of being frozen out is not fixed?

Certainly your fellow OA employee in the other comment doesn't seem to think that it's not on the table, because he is arguing that the threat is fine and harmless because it's never been exercised, which would seem to imply that it's still there...


Also work at OpenAI. Every tender offer has made full payouts to previous employees. Sorry to ruin your witch hunt.


I think the fact that you consider that a defense is a good illustration of why I had to ask that question. ("Yes, the gun is on the table, but the trigger has never been pulled. Sorry to ruin your witch hunt.")


Here's what I think: while Altman was busy trying to convince the public that AGI was coming in the next two weeks, with vague tales equally ominous and utopian, he (and his fellow leaders) were extremely busy trying hard to turn OpenAI into a product company with some killer offerings, and from the article, it seems they were rather good and successful at that.

Considering the high stakes, money, and undoubtedly the ego involved, the writer might have acquired a few bruises along the way, or might have lost out on some internal political fights (remember how they mentioned building multiple Codex prototypes; it must've sucked to see someone else's version chosen instead of your own).

Another possible explanation is that the writer had simply had enough - enough money to last a lifetime, a newly started family, a mark made on the world - and was no longer compelled (or able) to keep up with methed-up fresh college grads.


> remember how they mentioned building multiple Codex prototypes; it must've sucked to see someone else's version chosen instead of your own

Well it depends on people’s mindset. It’s like doing a hackathon and not winning. Most people still leave inspired by what they have seen other people building, and can’t wait to do it again.

…but of course not everybody likes to go to hackathons


> OpenAI is perhaps the most frighteningly ambitious org I've ever seen.

That kind of ambition feels like the result of Bill Gates pushing Altman to the limit and Altman rising to the challenge. The famous "Gates demo" during the GPT‑2 days comes to mind.

Having said that, the entire article reads more like a puff piece than an honest reflection.


> There's no Bond villain at the helm

We're talking about Sam Altman here, right? The dude behind Worldcoin, a literal Bond-villainesque biological-data-harvesting scheme?


It might be one of the cover stories for a Bond villain, but they have lots of mundane cover stories. Which isn't to say you're wrong; I've learned not to trust my gut in the category (rich business leaders) to which he belongs.

I'd be more worried about the guy who tweeted “If this works, I’m treating myself to a volcano lair. It’s time.” and more recently wore a custom T-shirt that implies he's like Vito Corleone.


> I'd be more worried about the guy

Or you could realize what those guys all have in common and be worried about the systems that enable them, because the problem isn't a guy but a system that enables those guys to become everyone's problem.

I don't mind "Vito Corleone" joking about a volcano lair. I mind him interfering in my country's elections and politics. I shouldn't have to worry about the antics of a guy building rockets that explode and cars that can chop off your fingers, because I live in a country that can protect me from those things becoming my problem; but because we have the same underlying systems, I do have to worry about him, since his political power is easily transferable to any other country, including mine.

This would still be true if it were a different guy. Heck, Thiel is landing contracts with his surveillance tech in my country despite the foreign politics of the US making it an obvious national and economic security risk, and don't get me started on Bezos - there are plenty of "guys" already.


Sure, but "the systems" were built by such people and are mere evolutions of the previous "I have a bigger stick" of power politics from prior to the industrial revolution.

Not that you're wrong about the systems; just that if it were as easy as changing these systems because we can tell they're bad and allow corruption, the Enlightenment wouldn't have managed to mess it up with both Smith and Marx.


I don't think it's accurate to say that anyone messed up with Smith or Marx. Smith didn't anticipate modern finance capitalism and his musings apply perfectly well to earlier iterations of capitalism - though he'd probably have had a stroke if you showed him the finance economy. Marx didn't anticipate capitalism's resilience but he had very little to do with the ideologies built on his work let alone their implementations (or attempts thereof).

That said, "I have a bigger stick" wasn't all we had before the present systems. I'm not a primitivist but I think it's a thought-terminating cliché to just look at what we have now and what came immediately before and decide that the local plateau is the best we can have.

Humanity invests a ton of resources in enforcing the status quo of power dynamics - both through overt force (be it military violence or the mere threat of violence necessary to assert contracts and claims to private property) and through more subtle means (e.g. narrative framing in education, news media and entertainment). Maintaining these systems takes immense resources and effort. But in moments of crisis our cooperative human nature can shine through until order is restored and we are ushered back into learned helplessness and mutual distrust as "the authorities" take over.

The problem isn't that the systems "allow corruption". The systems are inherently bad and corrupting. We build hierarchies of absolute power and then try to come up with solutions for the problems those hierarchies cause in the first place.

The problem isn't who rules. The problem is having rulers. Quoth Bakunin: "the people will feel no better if the stick with which they are being beaten is labelled 'the people's stick' [..] not even the reddest republic can ever give the people what they really want."


There is a lot of rationalizing going on in this article.

> I returned early from my paternity leave to help participate in the Codex launch.

10 years from now, the significance of having participated in that launch will be ridiculously small (unless you tell yourself it was a pivotal moment of your life, even if it objectively wasn't), whereas those first weeks with your newborn will never come back. Kudos to your partner, though.


The very fact that he did this exemplifies everything that is wrong about the tech industry and our current society. He's praising himself for this instead of showing remorse for his failure as a parent.


Failure as a parent and a partner. Pregnancy and childbirth are traumatic, both physically and emotionally. Basically abandoning your partner to deal with that alone is diabolical.


Odd take. OpenAI gives 5 months of paternity leave, and the author is independently wealthy. What difference does it make between spending more time with a 4-month-old vs. a 4-year-old? Or is your prescription that people should just retire once they have children?


> It is fairly rare to see an ex-employee put a positive spin on their work experience.

The opposite is true: Most ex-employee stories are overly positive and avoid anything negative. They’re just not shared widely because they’re not interesting most of the time.

I was at a company that turned into the most toxic place I had ever worked due to a CEO who decided to randomly get involved with projects, yell at people, and even fire some people on the spot.

Yet a lot of people wrote glowing stories about their time at the company on blogs or LinkedIn because it was beneficial for their future job search.

> It's just a good reminder that the overwhelming majority of "why I left" posts are basically trying to justify why a person wasn't a good fit for an organization by blaming it squarely on the organization.

For the posts that make HN I rarely see it that way. The recent trend is for passionate employees who really wanted to make a company work to lament how sad it was that the company or department was failing.


> The opposite is true: Most ex-employee stories are overly positive and avoid anything negative. They’re just not shared widely because they’re not interesting most of the time.

Yeah I had to re-read the sentence.

The positive "Farewell" post is indeed the norm. Especially so from well known, top level people in a company.


> It is fairly rare to see an ex-employee put a positive spin on their work experience.

Sure, but this bit really makes me wonder what the writer is prepared to do to other people to get to his payday:

"Nabeel Quereshi has an amazing post called Reflections on Palantir, where he ruminates on what made Palantir special. I wanted to do the same for OpenAI"


Well, as a reminder, OpenAI has a non-disparagement clause in their contracts, so the only thing you'll ever see from former employees is positive feedback.


I’m not saying this about OpenAI, because I just don’t know. But Bond villains exist.

Usually the level 1 people are just motivated by power and money to an unhealthy degree. The worst are true believers in something. Even something seemingly mild.


Allow me to propose a different rationalization: "yes, I know X might damage some people/society, but it was not me who decided, I get lots of money to do it, and someone else would do it if not me."

I don't think people who work on products that spy on people, create addiction or worse are as naïve as you portrayed them.


> It is fairly rare to see an ex-employee put a positive spin on their work experience.

FWIW, I have positive experiences about many of my former employers. Not all of them, but many of them.


Same here. If I wrote an honest piece about my last employer, it would sound very similar in tone to what was written in this article


> everyone I met there is actually trying to do the right thing" - yes! That's true at almost every company that ends up making morally questionable decisions!

The operative word is “trying”. You can “try” to do the right thing but find yourself restricted by various constraints. If an employee actually did the right thing (e.g. publish the weights of all their models, or shed light on how they were trained and on what), they get fired. If the CEO or similarly high-ranking exec actually did the right thing, the company would lose out on profits. So, rationalization is all they can do. “I'm trying to do the right thing, but.” “People don't see the big picture because they're not CEOs and don't understand the constraints.”


> It goes like this: we're the good guys. If we were evil, we could be doing things so much worse than X! Sure, some might object to X, but they miss the big picture: X is going to indirectly benefit the society because we're going to put the resulting money and power to good use. Without us, you could have the bad guys doing X instead!

This is a great insight. But if we think a bit deeper about why that happens, I land on this: there is nobody forcing anyone to do the right thing. Our governments and laws are geared more towards preventing people from doing the wrong thing, which of course can only be identified once someone has done the wrong thing and we can see the consequences and prove that it was indeed the wrong thing. Sometimes we fail to even do that.


We already have bad guys doing X right now (literally X, not a placeholder variable).


> It is fairly rare to see an ex-employee put a positive spin on their work experience.

I liked my jobs and bosses!


Most posts of the form "Reflections on [Former Employer]" on HN are positive.


I agree with your points here, but I feel the need to address the final bit. This is not aimed personally at you, but at the pattern you described - specifically, at how it's all too often abused:

> Sure, some might object to X, but they miss the big picture: X is going to indirectly benefit the society because we're going to put the resulting money and power to good use. Without us, you could have the bad guys doing X instead!

Those are the easy cases, and correspondingly, you don't see many of those - or at least few people are paying attention to companies talking like that. This is distinct from saying "X is going to directly benefit the society, and we're merely charging for it as fair compensation for our efforts, much like a baker charges you for bread" or variants of it.

This is much closer to what most tech companies try to argue, and the distinction seems to escape a lot of otherwise seemingly sharp people. In threads like this, I surprisingly often end up defending tech companies against such strawmen - because come on, if we want positive change, then making up a simpler but baseless problem, calling it out, and declaring victory isn't helping to improve anything (but it sure does drive engagement online, making advertisers happy; a big part of why the press does this too on a routine basis).

And yes, this applies to this specific case of OpenAI as well. They're not claiming "LLMs are going to indirectly benefit the society because we're going to get rich off them, and then use that money to fund lots of nice things". They're just saying, "here, look at ChatGPT, we believe you'll find it useful, and we want to keep doing R&D in this direction, because we think it'll directly benefit society". They may be wrong about it, or they may even knowingly lie about those benefits - but this is not trickle-down economics v2.0, SaaS edition.


> That's true at almost every company that ends up making morally questionable decisions! There's no Bond villain at the helm. It's good people rationalizing things

I mean, that's a leap. There could be a Bond villain who sets up incentives such that the people who rationalize the way he wants are the ones who get promoted / have their voices amplified. Just because individual workers generally seem like they're trying to do the best thing doesn't mean the organization isn't set up specifically and intentionally to make certain kinds of "shady" decisions.


  > It's just a good reminder that the overwhelming majority of "why I left" posts are basically trying to justify why a person wasn't a good fit for an organization by blaming it squarely on the organization.
It's also performance art, a way to acquire attention.



