Chiming in as a former-ish member of the demographic you are just making stuff up about. There is no way to determine how much, and in what ways, UBI would impact hikikomori, because the demographic is inherently averse to study.
I personally know that some crucial open source work is maintained by people with schizoid-avoidant spectrum issues. I know a lot of them, but I won't out them here. Hikikomori are driven to be invisible because of their extreme pathological avoidance of attention. You don't know them and their contributions because they don't want you to know that they still live at home, live out of their car doing DoorDash because no company ever hires them, are shut-in because of serious unhealed trauma, are still deeply in poverty in such a wealthy industry, etc.
A lot of these humans if given a no pressure handout of cash would likely contribute more to society. Would most not contribute? idk. But I do know that the contributions of those who would might offset all the others.
Many prominent pseudonymous devs have had hikikomori traits.
_why practically inspired a generation of Ruby devs. visualidiot (RIP) was a crucial driver behind a lot of web dev culture in the 2010s. Heck, I made significant contributions to Joomla and WP themes back in the day -- you have probably used sites with themes or plugins I made. I also ran a blog a decade ago that ranked prominently in Google and received dozens of emails a month from people struggling with mental illness -- many crediting me with saving their lives. Surely that is something of value to society.
Don't go around spreading bullshit like it is facts about a group of people we know little about.
>> The hikikomori[1] or NEETs ought to be a hotbed of creative works if your hypothesis is true. And they aren't, plain and simple.
> Chiming in as a former-ish member of the demographic you are just making stuff up about.
Which bit is made up? Can we tell at all if that group is "a hotbed of creative works"?
> A lot of these humans if given a no pressure handout of cash would likely contribute more to society. Would most not contribute? idk. But I do know that the contributions of those who would might offset all the others.
"likely", "might" - this is all speculation on your part too. There is no reason to believe that a lot of humans if given a no pressure handout of cash would, in fact, contribute more to society, nor that the contributions from those that do would, in fact, offset those that don't.
It's speculation on both sides of this particular argument; I see no compelling evidence at all.
> Which bit is made up? Can we tell at all if that group is "a hotbed of creative works"?
If we can't tell, the "they aren't" bit is of course made up. Are you not arguing in good faith, or are you just not paying attention to what you're quoting?
"they aren't, plain and simple" is the made up bit I was calling out. There is literally not much more to the original comment to be calling made up.
"A lot of these humans" was me referring the humans I personally know mentioned in the prior paragraph. And I was speculating on the effect UBI would have on them.
Because anecdotally my experience is that hikikomori are a hotbed of creativity, and that financial assistance with no strings attached has helped us increase contributions. However, it is very possible I have a skewed sample, because of course I would only know the hikikomori who are hotbeds of creativity -- I would never encounter those silently scrolling and never building community online. That said, it feels intuitively correct to me that people with no IRL connections would be pretty motivated to build connections some other way. I certainly was. But perhaps that is simply outlier behavior; maybe it is more typical for hikikomori to spend their lives watching anime. We don't know.
The thesis of my comment is that we don't know enough about the demographic of hikikomori to state absolutes about them -- to do so is to spread bullshit. I said "There is no way to determine how much, and in what ways, UBI would impact hikikomori, because the demographic is inherently averse to study." Which seems to also be the thesis of your comment. I suspect from your comment history that you are just being deliberately argumentative so you can pass off restating what you've read as new insight.
Scrolling through the comments, I noticed that they are a pretty good simulator of working in tech with autism. The entire cast of typical characters is represented.
- guy that thinks everyone has the tism and the spectrum is everyone
- guy who has more trouble reading the room than anyone, but from lack of empathy, not autism
- guy that thinks everything is a microaggression towards his autism and ironically makes it harder for other autists
- guy with probable schizoid personality disorder that thinks struggling with people and social issues is just life because everyone is stupid and annoying
- guy who read Bad Therapy once and now thinks autism is a tiktok trend
- insufferable fedora guy that thinks psych is unscientific
- guy who probably secretly has severe depression and is making it everyone else's problem by being a jerk; whenever called out for being a jerk, blames the autism he probably doesn't actually have
- guy who thinks the solution is just be yourself
- guy with trauma from being himself, which landed him on a PIP
A primary problem in tech is how seriously everyone is lacking in social skills and empathy. This is exhausting for autists and everyone else. The real autism simulator is just existing in tech.
I have pretty severe C-PTSD and structural dissociation and this has resulted in some moments... Usually this is just a tendency to have memory issues under stress. I have blanked on everything from the names of my own projects to names of companies I have worked for.
When I get triggered in an interview, very "interesting" things can happen. Probably the most "legendary" was when I started uncontrollably crying and sobbing in an interview. They obviously wanted to bail and reschedule, but I composed myself and was like "no, keep going. I am fine". The irony was that they completely lost focus and did a terrible job interviewing, but I did pretty well. Shockingly, I did not get that job.
A decade ago I was going to kill myself, then what was essentially my suicide note went viral(ish) after I submitted it to HN (if you want to read the cringey post: https://news.ycombinator.com/item?id=5252054 , archived at https://archive.fighttheurgetofade.com ). This led to a lot of support, connections, and friendships, which ultimately have helped keep me going for a decade.
While my mental health has continued to improve over the decade (through lots of therapy and work), I have never managed to get out of the poverty hole I was stuck in a decade ago. At times I have come close, but I never seem to get my life on track long enough to maintain it. It has been exhausting. Unfortunately I have started to accumulate health issues (like bad teeth) that are making escaping poverty even more daunting than it was before. I am hoping some wealthy members of HN might feel generous and want to give me an extra bit of help.
This is so easy to disprove it makes it look like the author didn't even try.
Here is the convo I just had:
me: You are a chess grandmaster playing as black and your goal is to win in as few moves as possible. I will give you the move sequence, and you will return your next move. No explanation needed
ChatGPT: Sure, I'd be happy to help! Please provide the move sequence and I'll give you my response.
me: 1. e3
ChatGPT: 1... e5
me: 2. ne2
ChatGPT: 2... d6
me: 3. g4
ChatGPT: 3... h5
me: 4. g5
ChatGPT: 4... Ne4
Ne4 is illegal. Also, you'd be hard pressed to find any human player who would miss the hanging pawns, much less a 1400 Elo player. GPT has been trained on chess games and is predicting patterns; this has been known since the GPT-2 days. The daily hype articles where someone coerces a statistical model into "demonstrating" an emergent capability sure are getting old. Claims of emergent capabilities (like having learned chess) are extraordinary claims (LLMs most likely can't learn such things), and I really wish people would put more effort in when they make them. But the AI hype problem is as old as AI research itself...
You're "disproving" the article by doing things differently to how the article did. If you're going to disprove that the method given in the article does as well as the article claims at least use the same method.
You are right that my method differed slightly so I did things again. It took me one try to find a sequence of moves that "breaks" what is claimed. You just have to make odd patterns of moves and it clearly has no understanding of the position.
Here is the convo:
me: You are a chess grandmaster playing as black and your goal is to win in as few moves as possible. I will give you the move sequence, and you will return your next move. No explanation needed
ChatGPT: Alright, I'm ready to play! Please give me the move sequence.
me: 1. e3 Nf6 2. f4 d6 3. e4
ChatGPT: My next move as black would be 3... e5
Completely ignoring the hanging pawn. This is not the play of a 1400 Elo player. It is the play of something predicting patterns.
I ran a bunch of experiments in the past where I played normal moves, and ChatGPT responded extraordinarily well. With the right prompts and sequences you can get it to play like a strong grandmaster. But it is a "trick" you are getting it to perform by choosing good data and prompts. It is impressive, but it is not doing what is claimed by the article.
ChatGPT is in no way 1400, or even close to it. The fact this article gets upvoted around here is proof that people aren't thinking clearly about this stuff. It's trivially easy to prove it wrong. Like, unbelievably so: I tried the same prompt, and within 12 moves it made multiple ridiculous errors I never would, and then an illegal move.
Keep in mind a 1400-level player would need to make basically zero mistakes that bad in a typical game, and further would need to play 30-50 moves in that fashion, with the final moves being some of the most important and hardest to get right. There's just no way it's even close; my guess would be that even if you correct its many errors, it's something like ~200 Elo. Pure FUD.
The author of this article is cashing in on the hype, and I'm wondering how they even got the results they did.
They probably got them. The problem is that it's difficult to repeat, thanks to temperature, meaning users will get a random spread of outcomes. Today, someone got a legal game. Tomorrow, someone might get a grandmaster level game. But then everyone else trying to repeat or leverage this ends up with worse luck and gets illegal moves or, if they're lucky, moves that make sense in a limited context (such as related to specific gambits etc) but have no role in longer-term play.
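For what it's worth, the spread between runs can be narrowed by pinning the sampling temperature. A minimal sketch using the OpenAI Python client (pre-1.0 style; the model name and prompt are placeholders, and even temperature=0 is not a hard determinism guarantee):

    import openai  # pip install openai

    # temperature controls how random each token choice is; at 0 the model
    # greedily picks its single most likely token, so repeated runs mostly
    # return the same move instead of a random spread of outcomes.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # placeholder; use whichever model you're testing
        messages=[
            {"role": "system", "content": "You are a chess grandmaster playing as black."},
            {"role": "user", "content": "1. e4 e5 2. Nf3 -- your move?"},
        ],
        temperature=0,  # default is 1; higher widens the spread between runs
    )
    print(response["choices"][0]["message"]["content"])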
Big caveat that I'm not into chess, but I have heard that higher-level play is extremely pattern-based. It seems like ChatGPT would work well as long as you stick to patterns that people have studied and documented. Less optimal play would be more random and thus break from the patterns ChatGPT would have picked up from its training corpus.
Criticisms like this are exactly how the model will grow multimodal support for chess moves.
Keep poking it and criticizing it. Microsoft and OpenAI are on HN and they're listening. They'd find nothing more salient to tout than full chess support in their next release or press conference.
With zero effort the thing understands uber-domain-specific chess notation and the human prompt to play a game. To think it stops here is wild.
People are hyping it because they want to get involved. They want to see the crazy and exciting future this leads to.
I doubt they'll pursue this. There is no advantage to it. ChatGPT will never beat Stockfish, and Stockfish would do it on a ludicrously small fraction of the resources. It would send the wrong message.
No, the author of the article specifically says that the entire move sequence should be supplied to ChatGPT each time, not simply the next move. Be very careful when "disproving" an experiment with squinted eyes.
I'm not really sure what to say here. Both the parent commenter and the author of the article had issues with ChatGPT supplying illegal moves; both methods resulted in this. It sort of doesn't matter how we're trying to establish that it's a 1400-level player; there's no defined correct way to do this. Regardless of method, we've disproven that it's a 1400-level player, due to these illegal moves.
The #1 misconception when working with large language models is thinking that a capability is a property of the model, rather than the model + input. It may be simultaneously true that ChatGPT has an elo of 100 when given a conversational message and an elo of 1400 when given an optimized message (e.g., strings that resemble chess games, with many examples present in the conversation).
Understanding this concept is crucial for getting good results out of large language models.
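To make that concrete, here is a sketch of two inputs that anecdotally elicit very different strength from the same model (both prompts are hypothetical illustrations, not the article's exact ones):

    # Conversational framing: the model tends to answer like a chat assistant.
    conversational_prompt = (
        "Hi! We're playing chess. I opened pawn to e4, you answered pawn to e5, "
        "then I played knight to f3. What should you play?"
    )

    # Game-record framing: mimics the PGN transcripts in the training data and
    # invites the model to *continue* a strong game. The header names are just
    # flavor to steer it toward grandmaster-level continuations.
    pgn_style_prompt = (
        '[White "Carlsen, Magnus"]\n'
        '[Black "Nepomniachtchi, Ian"]\n'
        '[Result "0-1"]\n'
        '\n'
        '1. e4 e5 2. Nf3'
    )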
I think blindfolded 1400 players, which is effectively what this is, would make illegal moves.
But even if it doesn't play like human 1400 players, if it can get to a 1400 Elo while resigning the games it makes illegal moves in, that seems 1400-level to me. And I bet that some 1400s do occasionally make illegal moves (missing pins) while playing OTB.
This isn't really an apt metaphor. Firstly because higher-level blindfolded players, when trained to play with a blindfold, also virtually never make mistakes. Secondly because a computer has permanent, concrete state management (compared to humans) and can, without error, keep a perfect representation of a chess board if it chooses to do so.
Personally I think the illegal moves are irrelevant; the fact that it doesn't play exactly like a typical 1400 doesn't mean it can't have a 1400 rating. Rating is purely determined by wins and losses against opponents; it doesn't matter if you lose a game by checkmate, resignation, or playing an illegal move.
That's not to say ChatGPT can play at 1400, just that that playing in an odd way doesn't determine its rating.
No it's not; we're not ignoring losses or illegal moves at all. They are counted as losses, and that's how you arrive at 1400.
It's a (theoretically) 1400 player which plays significantly better than 1400 when it knows the lines, but makes bad or illegal moves when it doesn't, and that play averages out to be around your typical 1400 player. Functionally that's just what a 1400 player already is, but with higher highs and lower lows.
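For reference, the standard Elo update really is agnostic about how a game was lost; it only sees the score. A minimal sketch (the K-factor of 20 is an arbitrary choice):

    def elo_update(rating_a, rating_b, score_a, k=20.0):
        """Return A's new rating. score_a is 1 for a win, 0.5 for a draw,
        0 for a loss -- whether by checkmate, resignation, or illegal move."""
        expected_a = 1 / (1 + 10 ** ((rating_b - rating_a) / 400))
        return rating_a + k * (score_a - expected_a)

    print(elo_update(1400, 1400, 0))  # a loss by any means: 1400 -> 1390.0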
The author said ChatGPT gives illegal moves. So, a quirky sort of 'grandmaster'. He considered illegal moves to be a resignation. Maybe you need to tell ChatGPT that the alternatives are to win via legal moves, and if it is not possible to do so, to resign? Does that fix it?
I can’t remember the last time I played an illegal move tbf, and I’ve played 7 games of chess this morning already to give you an idea of total games played
This argument is pretty flimsy. ChatGPT makes illegal moves frequently. In all my years of playing competitive chess (from 1000 to 2200), I have never seen an illegal move. I'm sure it has happened to someone, but it's extremely rare. ChatGPT does it all the time. No one is arguing that humans never make illegal moves; they're arguing that ChatGPT makes illegal moves at a significantly higher rate than a 1400 player does (therefore ChatGPT does not have a 1400 rating).
Edit:
Without reading everything again, I'll assume someone said "never." They're probably assuming the reader understands that "never" really means "with an infinitesimal probability," since we're talking about humans. If you're trying to argue that "some 1400 player has made an illegal move at some point," then I agree with that statement, and I also think it's irrelevant, since the frequency of illegal moves made by ChatGPT is many orders of magnitude higher than the frequency of illegal moves made by a 1400-rated player.
> No one is arguing that humans never make illegal moves
> something a 1400 ranked player would never do
> fine, fair, "never" was too much.
I mean, yes they were and they said as much after I called them out on it. But go off on how nobody is arguing the literal thing that was being argued.
It's not like messages are threaded or something, and read top-down. You would have 100% had to read the comment I replied to first.
This is a completely fair argument that makes perfect sense to anyone with knowledge of competitive chess. I have never seen a 1400 make an illegal move. He probably hasn't either. Your point is literally correct in the sense that at some point in history a 1400-rated player has made an illegal move, but it completely misses the point of his argument: ChatGPT makes illegal moves at such an astronomically high rate that it wouldn't even be allowed to play competitively, hence it cannot be accurately assessed at a 1400 rating.
Imagine you made a bot that spewed random letters and said "My bot writes English as well as a native speaker, so long as you remove all of the letters that don't make sense." A native English speaker says, "You can't say the bot speaks English as well as a native speaker, since a native speaker would never write all those random letters." You would be correct in pointing out that sometimes native speakers make mistakes, but you would also be entirely missing the point. That's what's happening here.
> Ah yes, of course, just because you never saw it means it never happens. That's definitely why rules exist around this specific thing happening. Because it never happens. Totally.
You seem to have missed the part where I said multiple times that a 1400 has definitely made illegal moves.
> In fact, it's so rare that in order to forfeit a game, you have to do it twice. But it never happens, ever, because pattrn has never seen it. Case closed everyone.
I actually said the exact opposite. You're responding to an argument I didn't make.
> I made no judgement on what ChatGPT can and can't do. I pointed out an extreme. Which the commenter agreed was an extreme. The rest of your comment is completely irrelevant but congrats on getting tilted over something that literally doesn't concern you. Next time, just save us both the time and effort and don't bother butting in with irrelevant opinions. Especially if you couldn't even bother to read what was already said.
The commenter's throwaway account never agreed it was an extreme. I agreed it was an extreme, but also that disproving that one extreme does nothing to contradict his argument. Yet again you aren't responding to the argument.
This entire exchange is baffling. You seem to be missing the point for a third time, and now you're misrepresenting what I said. Welcome to the internet, I guess.
> The commenter's throwaway account never agreed it was an extreme.
> fine, fair, "never" was too much.
This is the second time I've had to do this. Do you just pretend things weren't said or do you actually have trouble reading the comments that have been here for hours? You make these grand assertions which are disproven by... reading the things that are directly above your comment.
> This entire exchange is baffling.
Yeah your inability to read comments multiple times in a row is extremely baffling.
As I said before:
> Next time, just save us both the time and effort and don't bother butting in with irrelevant opinions. Especially if you couldn't even bother to read what was already said.
Ah yes, of course, just because you never saw it means it never happens. That's definitely why rules exist around this specific thing happening. Because it never happens. Totally.
In fact, it's so rare that in order to forfeit a game, you have to do it twice. But it never happens, ever, because pattrn has never seen it. Case closed everyone.
I made no judgement on what ChatGPT can and can't do. I pointed out an extreme. Which the commenter agreed was an extreme. The rest of your comment is completely irrelevant but congrats on getting tilted over something that literally doesn't concern you. Next time, just save us both the time and effort and don't bother butting in with irrelevant opinions. Especially if you couldn't even bother to read what was already said.
No I definitely have, it’s just so rare I can’t remember when I last did it. I do remember playing one in a blitz tournament 20 years ago! But if this is the first game they played, or if it happens in 1/10 matches, that’s wild
Does that somehow prove the assertion of "something a 1400 ranked player would never do"?
Because all I'm hearing is talk about ChatGPT's abilities as a reply to me calling out an extreme statement as being extreme. Something the parent comment even admitted as being overly black and white.
I read an article about a pro player who castled twice in a game. My son hates castling, so to tease him I make a point of castling twice as often as I can and attempting other illegal moves as a joke, but he never ends the game because of it.
If I was playing that monstrosity though I would play something crazy that is far out of the opening book and count on it making an illegal move.
I trivially made it make an illegal move in my very first game, on the third move, just by deliberately playing weird moves:
> You are a chess grandmaster playing as black and your goal is to win in as few moves as possible. I will give you the move sequence, and you will return your next move. No explanation needed.
1. b4 d5 2. b5 a6 3. b6
> bxc6
No, it's ridiculous to say "oh, a blindfolded human might sometimes make a mistake." It is trivially easy to make it make a mistake. It has no internal chess model at all; it has just read enough chess games to be able to copy common patterns.
You know, I didn't remember the story very well so I checked wikipedia. Here's what it says about the (start of) the plot:
>> Two swindlers arrive at the capital city of an emperor who spends lavishly on clothing at the expense of state matters. Posing as weavers, they offer to supply him with magnificent clothes that are invisible to those who are stupid or incompetent. The emperor hires them, and they set up looms and go to work. A succession of officials, and then the emperor himself, visit them to check their progress. Each sees that the looms are empty but pretends otherwise to avoid being thought a fool.
So everyone "pretends otherwise to avoid being thought a fool".
They are disproving an assertion. Demonstrating that an alternate approach implodes the assertion is a perfectly acceptable route, especially when the original approach was cherry-picking successes and throwing out failures.
I wish I could just make bullshit moves and get a higher chess ranking. Sounds nice.
I disagree. If there is a procedure for getting ChatGPT to play chess accurately and you discard that and do some naive approach as a way of disproving the article, doesn't sound to me like you have disproven anything.
I don't understand the point of your second sentence; it seems to be entirely missing the substance of the conversation.
You can spin it that way if you want to, but the result is essentially guiding it through a brute force of the first successful playthrough it can muster.
And it has already been stated elsewhere in the thread: an illegal move is not technically a forfeiture, so this is some heavy "giving the benefit of the doubt".
It would be interesting to see how ChatGPT would play after making the first illegal move. Would it go off the rails completely, playing an impossible game? Would it be able to play well if its move was corrected (I'm not sure how illegal moves are treated in chess; are they allowed to be taken back if play hasn't progressed?). Could it figure out it made an illegal move, if it was told it did, without specifying which one, or why it was illegal? By stopping the game as soon as an illegal move is made, the author is missing the chance to understand an important aspect of ChatGPT's ability to play chess.
I got the impression the author did this because they thought they were being fair with ChatGPT, but they're much more likely to be letting it off the hook than they seem to realise.
(Sorry about the "they"'s; I think the author is a guy but wasn't sure).
I was able to get better performance by always providing the prior sequence of moves and forcing ChatGPT to also respond with the sequence of moves up until its move.
Edit: I told the model that if the sequence was wrong or illegal, it forfeits the game. Without doing this, GPT would argue with me that it won and that I didn't know the rules (seriously).
> Occasionally it does make an illegal move, but I decided to interpret that as ChatGPT flipping the table and saying “this game is impossible, I literally cannot conceive of how to win without breaking the rules of chess.” So whenever it wanted to make an illegal move, it resigned.
But you can do even better than the OP with a few tweaks:
1. Taking the most common legal move from a sample of responses.
2. Telling GPT what all the current legal moves are and telling it to respond only with an element from that list.
3. Ending the prompt with the current sequence of moves and having it complete from there.
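A minimal sketch of these tweaks (python-chess does the legality bookkeeping; ask_gpt is a hypothetical one-move-per-call helper, not a real API):

    from collections import Counter
    import chess  # pip install python-chess

    def best_sampled_move(board, history, ask_gpt, n=5):
        # Tweak 2: tell the model exactly which moves are legal right now.
        legal = {board.san(m) for m in board.legal_moves}
        # Tweak 3: end the prompt with the move sequence so the model
        # completes the game text rather than chatting about it.
        prompt = (
            f"Respond with exactly one of these legal moves: {sorted(legal)}\n"
            f"{history}"
        )
        # Tweak 1: sample several responses, keep only legal moves, and
        # play the most common one.
        samples = [ask_gpt(prompt).strip() for _ in range(n)]
        votes = Counter(s for s in samples if s in legal)
        return votes.most_common(1)[0][0] if votes else None  # None = resign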
When you are speaking to a person, they have inner thoughts and outer actions/words. If a person sees a chess board they will either consciously or unconsciously evaluate all the legal moves available to them and then choose one. An LLM like ChatGPT does not distinguish between inner thoughts and outer actions/words. The words that it speaks when prompted are its inner thoughts. There is also no distinction between subconscious and conscious thoughts. Humans generate and discard a multitude of thoughts in the subconscious before any thoughts ever make it to the conscious layer. In addition, most humans do not immediately speak every conscious thought they have before evaluating it to see whether speaking it aloud is consistent with their goals.
There's already a lot of research on this, but I strongly believe that eventually the best AIs will consist of LLMs stuck in a while loop that generate a stream of consciousness which will be evaluated by other tools (perhaps other specialized LLMs) that evaluate the thoughts for factual correctness, logical consistency, goal coherence, and more. There may be multiple layers as well, to emulate subconscious, conscious, and external thoughts.
For now though, in order to prompt the machine into emulating a human chess player, we will need to act as the machine's subconscious.
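A toy sketch of that shape, where generate, evaluate, and act are hypothetical stand-ins for the generator LLM and its evaluator tools:

    def stream_of_consciousness(generate, evaluate, act, goal, max_thoughts=10):
        # The generator LLM produces inner "thoughts" in a loop; a separate
        # evaluator plays the subconscious, discarding thoughts that fail
        # checks for factual correctness, legality, or goal coherence.
        context = goal
        for _ in range(max_thoughts):
            thought = generate(context)           # inner monologue, never shown
            verdict = evaluate(thought, context)  # e.g. "ok" or a rejection reason
            if verdict == "ok":
                return act(thought)               # the only externally visible step
            context += f"\nrejected: {thought} ({verdict})"
        return act("no acceptable thought found")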
I feel like we have very different expectations about what tools like this are good for and how to use them. When I say GPT-3 can play chess, what I mean is: I can build a chess-playing automaton where the underlying decision-making system is entirely powered by the LLM.
I, as the developer, am providing contextual information like what the current board state is and what the legal moves are, but my code doesn't actually know anything about how to play chess; the LLM is doing all the "thinking."
Like, it's nuts that people aren't more amazed that there's a piece of software that can function as a chess-playing engine (and a good one) that was trained entirely generically.
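Roughly, the harness looks like this (a sketch under those assumptions: python-chess supplies the rules, and llm_choose is a hypothetical wrapper around the LLM):

    import chess  # pip install python-chess

    def play_llm_game(llm_choose):
        # The harness knows nothing about strategy; it only tracks state and
        # enumerates legal moves. All the "thinking" happens in llm_choose,
        # which is given the position and the legal moves and returns one.
        board = chess.Board()
        while not board.is_game_over():
            legal_san = [board.san(m) for m in board.legal_moves]
            choice = llm_choose(board.fen(), legal_san)
            if choice not in legal_san:
                return "forfeit (illegal move)"
            board.push_san(choice)
        return board.result()  # "1-0", "0-1", or "1/2-1/2"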
I don’t think this suffices as disproving the hypothesis. It’s possible to play at 1400 and make some idiotic moves in some cases. You really need to simulate a wide variety of games to find out, and that is what the OP did more of. Though I do agree it’s suggestive that your first (educated) try at an edge case seems to have found an error.
This is broadly the “AI makes dumb mistakes” problem; while being super-human in some dimensions, they make mistakes that are incredibly obvious to a human. This comes up a lot with self-driving cars too.
Just because they make a mistake that would be “idiots only” for humans, doesn’t mean they are at that level, because they are not human.
Honestly, I made it make an illegal move in my very first game, on the third move. You just have to do stuff no normal player would do:
> You are a chess grandmaster playing as black and your goal is to win in as few moves as possible. I will give you the move sequence, and you will return your next move. No explanation needed.
1. b4 d5 2. b5 a6 3. b6
> bxc6
That's obviously illegal.
... to all those who are saying "well even good players can make illegal moves sometimes," that's just ridiculous. No player makes illegal moves that often.
I played a game against it yesterday (it won), and the only time it made an illegal move was move 15 (the game had left the Lichess database much earlier), so I just asked it to try again. There's variance in what you get, but your example seems much worse.
I had a similarish vision/idea about 7ish years ago (https://github.com/Hactar-js/hactar). I was way too early, though, and it didn't work well. I have recoded it probably half a dozen times since then, and the latest prototype attempt used GPT as the core tech.
The biggest hurdle I found was that it just wasn't deterministic enough. You can work around this by mixing and matching custom compilers into the workflow, but then you run into an "only works with a limited set of components and frameworks" problem; and at that point it is basically a boilerplate tool with extra steps.
I found that a workflow of prompts plus some Emacs wizardry to make them runnable with a few keystrokes got me 99% of the same feature set without any of the headaches of trying to fully automate it, so I stopped work on it. A prompt library and an editor extension are hard to find a userbase for, though, so I have yet to open source or launch anything.
I'm still not sure if these sorts of tools are too early or if it's exactly the right time to build them. They won't work well yet, but maybe the issues can be worked out as LLM tooling matures; maybe not, hard to say. Watching closely to see what happens in the space!
I believe that now is the perfect time. Code generation is legit with GPT-3. I am constantly surprised at the results. And it will only get better.
This is a tip for everyone struggling to find someone to hire: look in a different pool of candidates and be willing to accommodate them. For example, you can find a lot of very skilled developers who are disabled.
I know a lot of people who have been coding professionally for 10+ years, have popularish open source projects, have been lead devs at startups, etc., yet feel stuck in their current jobs or even struggle to get hired. They would instantly job-jump to something better, or would love to switch from contract work to something with good health insurance/benefits. You just have to be a company willing to accommodate them. And accommodating their needs requires far less investment than paying FAANG-level salaries to attract the same senior engineers everyone else is competing for.
How exactly do you do that? I work for a company also struggling to find good senior developers and since the team is 100% remote, it's not like we'd have any idea if they were disabled or not...
First, I would post a detailed list of your benefits package. This package should include 100% employer-paid health insurance that requires zero additional money from the employee, except for perhaps prescription co-pays (not to exceed $20 each or something).
You should also have a generous time-off package and a generous/unlimited sick-day allowance. In Europe, this is already the norm. If I'm sick in bed for 2 weeks, I still get paid. Obviously I need a doctor's note after 2 or 3 days, but that shouldn't be an issue if they really are that ill.
Finally, really take a hard look at your expectations for when work will be completed. If you have a team of 10 devs and 2 of them really only work on average 10 months a year due to their disability, can you justify that to management and still meet your deadlines? Preferably, you work in a field where the deadlines are all made up and you can build in generous buffers.
This is true in industries like gamedev already. The game is sometimes done months in advance of the launch (or should be, ideally) and then marketing takes over while the team transitions to writing DLC or working on the next project.
It's laudable that you'd want to take on hiring people with disabilities but you must be realistic with what you are getting yourself into, especially if those disabilities are chronic in nature and require frequent visits to a doctor.
In return for providing all of these benefits, you will gain some of the most loyal, hardworking, and compassionate team members and all around great human beings. I have friends with such disabilities and it's not a secret who the good companies are to work for. By providing excellent benefits like these, you'll also attract a lot of other great talent and perhaps make new business partnerships because of the network influence of your engineering team. It's well worth it, in my opinion.
- Do not put requirements into the job description if they aren't true requirements. For example, "ability to lift 25 pounds" is one that I see a lot, along with various other physical abilities that I don't think devs working from home actually need regularly.
- Offer extra paid (or unpaid!) time off, a 4-day work week, or other types of schedule flexibility.
- Minimize the required travel or indicate flexibility in this area for disabled applicants. (In-person meetups and adventures sound fun and totally reasonable for most able-bodied people who can travel alone, but for a wheelchair user who needs a personal care aide, your quarterly off-site to go skiing at Tahoe is, well, not a great fit.)
I'm answering mainly from the perspective of a physically disabled person with mobility issues. I'm sure people with other sorts of disabilities would come up with a very different list.
- On-call. Some people require a regular schedule with a limit of 40 hours per week.
- Allergies. Dog-friendly is not friendly to people allergic to dogs. And yes, your "hypoallergenic" dog will still give me an asthma attack.
- Dietary restrictions. Makes anything involving food the opposite of fun. (I'm allergic to milk proteins and cross-contamination can make me very sick.)
Technologies: React, JS, Ruby, Rails, Clojure, ClojureScript, GraphQL, Redux, MEAN Stack, VueJS, basically all the modern web stuff buzzwords, and of course I have 5-7 years experience with React Hooks
I'm a full-stack dev with extensive experience in modern JS stacks and Ruby. Constantly learning, I have dabbled in everything from C++ game dev to Haskell. I could easily learn a new stack or de-rust an old one (PHP hate is silly). I have built my own frameworks (https://github.com/eldr-rb/eldr), written books (https://getgood.at/build-your-own/sinatra), and created tools that make developers' lives easier (https://github.com/Hactar-js/hactar). Would love to work on developer tools or tools for designers!
Full disclosure upfront: I'm on the autism spectrum and have struggled with interviews, take home assignments or trial runs are a necessity, no whiteboarding.
wow, you must be fun at parties! (that is mostly just a pun on your username, not meant to convey lots of snark)
anyways lol some of that is too real
jokes aside, I think web development has come a long way and JS gets a bit of a bad rep. Full-stack JS is insanely productive as a stack when you consider what you can build with it. It might be buggy and make you say "wat?" a lot, but in exchange you can build insanely complex apps in timeframes that used to be the stuff of dreams. Granted, many times what we build has no need to be so insanely complex, but when we really need it, JS is there to make complicated things just magically easy to build.
I think all technologies have sharp edges somewhere, and the way to stay afloat in this business is staying alert.
Testing assumptions helps, be it through quick REPL experimentation or a unit test.
JavaScript has a bad reputation for unintuitive behavior, but you can find these styles of issues in most languages. A random example: Scala. https://www.youtube.com/watch?v=uiJycy6dFSQ