
If you create a standardized test, it will be gamed. Even with the small modicum of standardization around interview questions that we currently see, people have published books like Cracking the Coding Interview, making it easier for people who don’t have the skills for a particular job to pass interviews at any place that uses standard-ish questions.

Furthermore, as an avowed enemy of “Clean Code”, I don’t want to see standardization because I fear that well-promoted ideas that I think are terrible would become required dogma. I prefer chaos over order that I don’t like.



The current system is already gamed and virtually standardized. The only difference that official standardization would present is that applicants would no longer have to go through the Leetcode gauntlet each time they want to switch jobs, which would save a breathtaking amount of time and effort currently occupied by wasteful redundancy in the interview process.

Corporations can use that standard exam/license as a baseline and then focus their interviews on domain-specific questions and the like. The existence of standardization does not negate custom processes.


> The current system is already gamed and virtually standardized.

This is only remotely true if you're looking at a very narrow slice of software development jobs. Those companies and jobs are overrepresented here, but remember that even in the US the majority of developers do not work at recognizable "tech companies." Much less the rest of the world.

I've been a professional software developer for over a decade, changed jobs many times, and never done an intense algorithm interview, and I haven't been going out of my way to avoid them. I've even worked at some startups, though never one based in NY or SF. A handful of massive tech companies and their practices are disproportionately influential, but they are not actually the norm, broadly speaking.


This might be a regional thing, but I have done probably around 100 technical interviews in my career (both enterprise and startups), mostly in the Bay Area, and the vast majority of these involved algorithm questions that had no relation to the job function. Most were around the difficulty of "find the largest palindrome in a string" or "reverse a singly linked list". On the harder end were things like "serialize and deserialize a tree".
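For anyone who hasn't sat one of these, "reverse a singly linked list" is about as canonical as these questions get. A minimal sketch in Python of the expected iterative answer (the `Node` class is just scaffolding for illustration, not any particular company's rubric):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def reverse(head):
    # Walk the list once, flipping each node's next pointer.
    # O(n) time, O(1) extra space.
    prev = None
    while head is not None:
        head.next, prev, head = prev, head, head.next
    return prev
```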


I'll defend this a little bit in the sense that "had no relation to the job function" is just kind of unavoidable in interviews, or at least hard to avoid without paying major costs. The only way to have an interview that even comes close to reflecting real work is a pretty long take-home, and there are good arguments for not doing those (not least that most candidates really don't want to).

But yeah, the entangling of algorithms questions and coding questions is unfortunate. They're just separate skills. Some people are excellent coders who think "big-O" means something obscene, and some people are a walking discrete math textbook who can't problem-solve to save their lives. Triplebyte split (and Otherbranch splits) the two into separate sections, with coding problems explicitly designed NOT to require any of the common textbook algorithms. It's sometimes a little darkly funny how quickly a particular sort of candidate folds when asked to do something novel that steps outside what they've been able to memorize.


> and the vast majority of these involved algorithm questions that had no relation to the job function.

Consider the problem that you're hiring a software engineer and the company has openings in several different teams that only have the job title in common.

Do you have four different sets of problems that are related to job functions? Does the interview take four times longer? Or do you extend offers to a dozen software developers this week and then have the teams that have the most need / the applicant appears to be best suited for add the headcount there?

If you are giving the same interview to all the candidates (so that you're trying to eliminate bias of asking different questions of different people) ... would that not tend to be solved by asking more abstract questions that are then compared to an agreed upon rubric as to which candidates were "meets expectations"?

... And what if it is for one team that has one set of problems? Do you have the candidates sign NDAs so that you can show them the actual problems, and then pursue legal action if something leaks? And if today's actual problem is solved tomorrow (unrelated to the applicant's solution ... though I've experienced some "here are some real problems we are having" interviews, with startups trying to get an hour or two of consulting time for free with no intent to hire), do you give a different interview to someone next week?

The standardized and unrelated work means that you're not accidentally getting in trouble with HR over some bias in the questions, or running afoul of the FLSA by having someone do uncompensated work that might be related to real things being done.


I once got dinged at Facebook for using a tree serialization scheme that differed from the expected one in a way that saved linear space but made deserialization slightly harder to explain :)
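For the curious, here is roughly the kind of trade-off being described, sketched in Python (the names and exact scheme are my guesses, not Facebook's expected answer): preorder serialization with '#' null markers, where dropping the trailing markers saves space proportional to the number of leaves, at the cost of a deserializer that must treat "out of tokens" as an implicit null.

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def serialize(root):
    # Preorder walk with '#' marking each absent child.
    out = []
    def walk(n):
        if n is None:
            out.append('#')
            return
        out.append(str(n.val))
        walk(n.left)
        walk(n.right)
    walk(root)
    # Trailing '#' markers carry no information: drop them.
    # This saves space linear in the number of leaves.
    while out and out[-1] == '#':
        out.pop()
    return ','.join(out)

def deserialize(data):
    tokens = iter(data.split(',')) if data else iter(())
    def build():
        tok = next(tokens, '#')  # exhausted stream == implicit null
        if tok == '#':
            return None
        node = TreeNode(int(tok))
        node.left = build()
        node.right = build()
        return node
    return build()
```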


You're not wrong, but Triplebyte exists to service that very narrow (but very well-funded, very lucrative, and very influential) segment, and so does this site, the fund behind this site, and most of the commenters in this thread.


True. Except that should be past-tense. Triplebyte existed to serve a narrow market, and failed largely because of that narrow view.

I think people really do underestimate how much FAANG and Silicon Valley practices have skewed the viewpoint of engineers and technology jobs in the United States. Not just in terms of comp, but in terms of architectural and technology approaches as well. Most of what the big guys do works for them at their enormous scales, but is plain dumb for the vast majority of companies and use cases. Yet we are all infected by the FAANG approach.


People underestimate how much cultural baggage influences things.

I'll give a very simple example. I did a few SWE interviews in 2020, and several companies did the initial screen over the phone, and the on-site over Zoom.

In both cases it was a remote interview. There was no reason not to do both over Zoom. The only reason was that the previous process was a phone interview followed by an in-person onsite, and they realized they had to replace the in-person onsite with Zoom, but they didn't think to replace the phone screen. If you were starting from scratch, it would make no sense.

In this case, the whole origin of the Leetcode interview is "we're going to hire the smartest people in the world." You can dispute whether that was true back in 2009, but it was certainly part of Google / Facebook's messaging. Now, in 2024, I think it has morphed much closer to a standardized test, and even if people might begrudgingly admit that, there's still the cultural baggage remaining. If a company used a third-party service, they'd be admitting they're hiring standardized candidates rather than the smartest people in the world. Which might be an "unknown known" - things that everybody knows but nobody is allowed to admit.


I definitely agree that this industry, for all of its self-proclaimed freethinking and innovation, is rife with cultural baggage. Allowing for an independent standardized interview step would defy the not-invented-here syndrome that many leading corporations subscribe to, the belief that their own process is best. Not to mention that reducing friction for applicants (by not repeating the Leetcode stage) runs counter to employee retention incentives, which depend on making it costly for employees to shop around for new employers. So me saying that we ought to have a standardized test to save everybody's time is more wishful thinking than anything.


This is definitely a factor. "You don't understand, we have a really high bar and we only hire the best people" is a bit of a meme in recruiting circles because you will never ever ever ever not hear it on a call.

I don't think we found it a barrier to getting adoption from companies though - perhaps because "we're a really advanced company using this state of the art YC-backed assessment" satisfies that psychological need? Unclear.


> but it was certainly part of Google / Facebook's messaging.

It entered the online cultural zeitgeist before that, with Microsoft talking about their interview processes, and indeed early interview books were written targeting the MSFT hiring process that many other companies copied afterwards.

I graduated college in 2006 and some companies still did traditional interviews for software engineers (all soft skills, and personality tests, no real focus on technology, except maybe some buzzword questions), and then you had the insane all day interview loops from MSFT and Amazon.

Back then, Google famously only hired PhDs and people from Ivy Leagues, so us plebs didn't even bother to apply. In comparison, Microsoft would give almost everyone who applied from a CS program at least a call back and a phone screen.


How do we let someone fly hundreds of people through the upper atmosphere with a certificate, but you can't make a login page with javascript without a unique multi-day interview for each distinct company?


Obviously the current situation is crazy, but part of the issue is that the specific asks for a particular developer job are dependent on 1) the stack in use and 2) the org chart at that company.

1 is obvious: if you need JS devs, most hiring managers won't want to hire Pascal devs and hope they figure it out. We can question the wisdom of this, but it is the reality.

2 is less obvious but not super obscure. Depending on how you structure your teams, similar positions at different companies might require more full-stack knowledge, or better people skills, or something else. IME there is little to no standardization here for developer roles, especially compared to something like HR or Accounts Payable, or even very similar IT-adjacent industries like game development.

Fix both of these issues and we would be able to have something more like a formal apprentice/journeyman/master system for various classes of software developer. As it is, each role actually is pretty much totally unique, at least compared to similar roles in other companies (there tends to be more standardization within the same company).


To (1): this is true, and more true than it should be, but I think this falls into the category of "trying to optimize expected value" more so than "a hard requirement" at most employers. There's usually only like 1-2 hard tech requirements, even for pretty picky roles, if they're confident someone is good. It's just that they don't know ahead of time who is, so they may as well bet on the better-matched candidates.


For air transport in the U.S., it's not just one certificate, it's many. You get your private license, instrument rating, multi-engine rating, commercial certificate, instructor certificate, and finally the air transport certificate. And you're not allowed to even think about that last step until you've accumulated 1500 flight hours on the previous steps. Being allowed to write a Javascript login page is easy pickings compared to that.


My dad was an airline pilot. To hear him talk he was surrounded by morons.

That said, at least with airplanes you have to put in the flight hours and they have to be documented. Most commercial pilots start off at very low pay doing puddle jumpers and regional jets. Military experience might give you a leg up because it shows you've spent a lot of hours going through a very regimented program.

My dad was fortunate to get into the industry right before flying became an option for the masses rather than just for luxury and business travel. It was a high-paying, glamorous job.


Every airplane flies pretty much the same way, and pilots all get paid pretty much the same*

Every website stack, and the level of complexity under it, is unique, and there's also a huge pay differential.

*could be false assumption on my part


> Every airplane flies pretty much the same way

Not exactly, but that's why we have different certificates, endorsements, and type ratings that show demonstrated competence with each type of airplane.


A popular aircraft type is likely to be built for 20+ years and be flying for 40 or more. The 737 was rolled out in 1967, and the fourth generation is still being built. This is rather like a major chunk of the world's computing infrastructure running on Fortran (F77, F90... 2008, 2023).

Oh, wait, it does.


I'm not exactly sure what your point is, but it's even stronger when you include F'66 on that list.


Maybe it’s time to start thinking about doing to software what has been done with other professional fields: licensing and checkouts at various levels. If I have to spend 30 hours or so learning, practicing, and demonstrating my knowledge of aircraft instrument procedures before I can attempt them as a pilot in real airspace, maybe it’s not that big of a jump that we’d license different software features, and going outside of those bounds would be subject to loss of license.

Then we’d know which set of language features you’re familiar enough with to hold that cert. It might cut down on the waste and proliferation of useless tech that seems to be strangling the industry because people just want it on their resume.

It would do enough to dissuade companies from hiring non-licensed engineers (hey, you could actually call yourself an engineer and not feel like an imposter), and it would put hard liability on things that definitely need it: financial and health data, which seem to be ripe for exploits and disclosures.

One way or another the insanity of the current system needs to stop.


I think the hard thing is that there’s just a lot of mediocre programmers out there writing mediocre software. Should they be accredited or not?

I think a lot of average programmers will end up accredited if they see it as a path to a job, just like we see with Microsoft certificate programs. And if that happens, I wouldn’t want the accreditation test to be my company’s hiring bar. I’ll still run my own interview to make sure candidates are actually good. And so will most companies. So the time consuming interviews won’t actually go away.

The one big problem licensing would solve is that we could insist some baseline amount of security knowledge is in the tests. Right now, plenty of people get jobs handling sensitive personal data without knowing the first thing about how to keep data secure. That just can’t continue. It’s insane.


I have interviewed people who attested to having a Java certification from Oracle and who, while able to pass that test, were unable to use their knowledge to develop solutions or solve problems.

I could ask how the class loader worked, or about the syntax associated with a particular construct (at that language level, not anything later), and get the correct answer.

They could pass tests and follow instructions.

Licensure for problem solving is difficult. Extend that to different domains and it is an even harder problem to solve.

https://www.nspe.org/resources/pe-magazine/may-2018/ncees-en...

> The Software Engineering PE exam, which has struggled to reach an audience, will be discontinued by the National Council of Examiners for Engineering and Surveying after the April 2019 administration. The exam has been administered five times, with a total of 81 candidates.

> NCEES’s Committee on Examination Policy and Procedures reviews the history of any exam with fewer than 50 total first-time examinees in two consecutive administrations and makes recommendations to the NCEES Board of Directors about the feasibility of continuing the exam.

> In 2013, the software exam became the latest addition to the family of PE exams. The exam was developed by NSPE, IEEE-USA, the IEEE Computer Society, and the Texas Board of Professional Engineers—a group known as the Software Engineering Consortium. Partnering with NCEES, the consortium began working in 2007 to spread the word about the importance of software engineering licensure for the public health, safety, and welfare.

> This collaboration was preceded by Texas becoming the first state to license software engineers in 1998. The Texas Board of Professional Engineers ended the experience-only path to software engineering licensure in 2006; before the 2013 introduction of the software engineering PE exam, licensure candidates had to take an exam in another discipline.


Not a pilot, but it sounds like 250 hours minimum to become one commercially [1]. My guess is that unless you can buy your own airplanes, it'll take more than that for somebody who owns a plane to trust you being in charge of it while you work toward your 250 hours.

It varies by state, but to become a Master X it often requires >5 years (e.g. 5 for CO Master Plumber [2], 4 for TX Master Plumber [3], 7, or 5 plus a Bachelor's, in NY [4]). Imagine needing to pair program with somebody for 5 years before you could be considered Senior. That'd probably cause a lot more professional development than the current copy-from-Stack-Overflow meta, but it would also really irritate a lot of young professionals. You can't even quit and start your own business, because you can't write software as a journeyman without senior supervision!

[1]: https://www.aopa.org/training-and-safety/active-pilots/safet...

[2]: https://dpo.colorado.gov/Plumbing/Applications

[3]: https://tsbpe.texas.gov/license-types/master-plumber/

[4]: https://www.nyc.gov/site/buildings/industry/obtain-a-master-...


It would destroy coding boot camps and outsourcing but many people already pursue B.S. degrees in Computer Science or Computer Engineering. If the laws changed to require a 5 year paid apprenticeship that allowed you to skip the college degree I don't think too many people entering the field would be upset so long as we planned and accounted for the transition (like a path towards accreditation for currently employed software developers since no accredited "masters" exist to pair under right now).

I think another issue is that there's no feasible inspection process or liability management. We have crap like SOC 2 and PCI compliance, but they're so dependent on self-reporting that they mean little in practice. Mountains of spaghetti code accumulated over decades are not inspectable the way a building is. Software salary costs are already very high, and this would push them even higher. It would eliminate offshoring/outsourcing as an option for businesses, and they would lobby hard against it. Uncertified software products from other countries would need import controls, and there would be all sorts of other issues that don't exist in our unregulated environment right now.

It's also hard to imagine what sort of massive software failure would be required to spur regulatory change that we haven't already experienced, collectively shrugged at, and done nothing about.


what do you think the interview process for airline pilot looks like?

>“Hello, I am a certified pilot.”

>“Great! You start tomorrow!”


I'm sorry, are you serious or being deliberately obtuse?

Even the most minimal pilot's license, a.k.a. the PPL, still requires approximately 40 hours of instruction, taking the ground school exam, and finally passing a check ride with an FAA-designated examiner.

Flying a passenger airliner as you seem to be alluding to basically means that you have a commercial license and likely an ATP - we're talking hundreds if not thousands of hours of flight experience.

Unbelievable.


Pilots have it easy because they actually have a series of tests they can take to prove they can do something. Once they pass, that's done: proof acquired. I have around thirty thousand hours working as a software engineer, have been part of many successful product deliveries, have progressively greater accomplishments, and hold a degree in computer science, and all of that is worth almost zilch to the next interviewer.


> If you create a standardized test it will be gamed.

Well, the medical profession has a standardized licensing process. It's not perfect, but it certainly keeps the interview process to (mostly) questions of mutual interest.

I think we can learn from the medical profession here. Otherwise, "I prefer chaos" implies that the incompetents are the ones who will lose.


Why do standardized tests work for so many other industries?


Just out of curiosity, what are some of the problems with "Clean Code"? I thought most of it made sense as basic guidelines. It's been a while since I read it, though.


I think https://qntm.org/clean makes a good case that the advice it gives can be taken to very bad extremes -- and that the author of the book does so in some cases when providing "good" examples. That's not to say that the advice is all bad, but that the book as a whole is not a good presentation and inexperienced programmers can enthusiastically learn the wrong lessons from it.

Edit: grabbed the wrong link from my history. Updated to the correct link.


I think the root problem is that a lot of people want books to tell them how to think. I think that's why I hate things like Oprah book clubs, complete with quizzes to make sure you think the right things now.

My best reading experiences involve arguing with the book. And talking about those books and my disagreements with them has been useful, too.

Orthogonally, all humans tend to overuse new knowledge/skills. That's part of how humans learn. We try to find out how far the use stretches and in what ways we can apply our new toys! I would expect any successful book on practices to be seen as overused.


In my opinion, the only significant contribution Clean Code made was the concept of clean code. The problem is that my definition of clean code is almost completely contradictory to what the author of the book thinks constitutes clean code.


Honestly it’s DRY that I oppose more than anything else, I’ve watched too many codebases turn into unreadable spaghetti because engineers thought everything needed to be abstracted. With regard to Clean Code, I think Uncle Bob’s takes on function length are ridiculous (something like “functions should almost never be over 4 lines”). In general, I just feel like he thinks very little of programmers and comes up with rules with an eye towards constraining bad programmers, not empowering good programmers.
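As a contrived illustration of the function-length complaint (my own example, not one from the book), the same trivial logic reads fine as one function but turns into a scavenger hunt when sliced up to satisfy a hard length rule:

```python
# One readable function:
def normalize(prices):
    """Clamp negative prices to zero and round to cents."""
    return [round(max(p, 0.0), 2) for p in prices]

# The same logic fragmented to satisfy a "no function over
# a few lines" rule -- now the reader must chase three helpers
# to learn what one list comprehension said outright:
def _clamp(p):
    return max(p, 0.0)

def _to_cents(p):
    return round(p, 2)

def _normalize_one(p):
    return _to_cents(_clamp(p))

def normalize_fragmented(prices):
    return [_normalize_one(p) for p in prices]
```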


Here’s a great example of why it could be ineffective: https://youtu.be/tD5NrevFtbU


Standardization reminds me of old stories about 1970s-80s blue-chip companies trying to hire programmers like they hired secretaries. They'd test applicants for things like words-per-minute typing speed and simple programming exercises, hire in bulk, and then dole batches of them out to various departments. Which sounds like Triplebyte's model, the motivation behind things like Clean Code, and the webshitification of everything.

Opposite of that is the idea that work and interpersonal habits, communication skills, and domain knowledge are more important than raw programming skill for most jobs.


Standardized process doesn't have to mean a purely checklist-based rubric. Triplebyte wasn't - and Otherbranch especially isn't - devoted to the idea that a good engineer can be reduced to checkboxes. And speaking for myself as a founder, I in particular believe very strongly in the idea of intangibles as important criteria. Having a standard process makes intangibles easier to detect, not harder, because you can look for those ethereal little bits of signal against a familiar backdrop.

The last question on the grading form for our interviewers is, to quote it exactly in its current form:

-----

Full interview overall score

(Would you vouch for this person's skills?)

* No no no no no

* Some redeeming qualities but not someone you'd recommend even as a junior hire

* Good for a junior role, still a bit short of a senior one

* Would recommend for a senior role

* Incredible, get this person a job right now

-----

That, to me, is the opposite of what you're talking about. Expert opinion is a central part of what we do, and a big part of what I think gives us an advantage over something like an automated coding test. We just treat expert opinion as a type of data in its own right so that we can, for example, adjust for whether one interviewer grades more harshly on average than another, and make sure that it is actually producing valid results down the line.



