
I'm an embedded software engineer by day, and like it or not I have to acknowledge that AI tooling is coming to our work, so I'm currently working on learning to interact with AI coding tools like Claude Code more effectively and efficiently by "vibe-coding" a game for a family member on my personal time. Something inspired by a blend of 'Recettear' and 'Stardew Valley', with the twist that the player shopkeeper is an anthropomorphic cat.

Not OP, but one example I can think of: Jeff Bezos moved from Washington state to Florida two years after Washington enacted a 7% capital gains tax "on the sale or exchange of long-term capital assets such as stocks, bonds, business interests, or other investments and tangible assets"[1] which "reportedly helped him save $1 billion in taxes."[2]

[1]: https://dor.wa.gov/taxes-rates/other-taxes/capital-gains-tax

[2]: https://finance.yahoo.com/news/jeff-bezos-moved-florida-impa...


You don't support trying to save the planet?

The Bezos Earth Fund: https://www.bezosearthfund.org/


The planet will be just fine. It measures consequential time in many millions of years. You mean: support saving humanity.


I mean, yeah. When people say saving the planet they mean saving humanity. That's exactly it. A barren rock does no one any good. I don't get why people hang onto this expression; it's as if you heard that George Carlin bit and now that's your anchor to reality.


It's not like the dinosaurs had a save the earth campaign. Yet, before humans the rock had life forms that died out while the rock itself continued being a viable planet supporting life. If humans die off, the planet will continue on with life continuing in new ways.


For the past 50+ years there really has been a somewhat significant and quite influential body of people who genuinely want to preserve the planet’s ecosystem even at the expense of the people living on it.


I think it might be the organizational architecture that needs to change.

> However, we have never before applied a killswitch to a rule with an action of “execute”.

> This is a straightforward error in the code, which had existed undetected for many years

So they shipped an untested configuration change that triggered untested code straight to production. This is "tell me you have no tests without telling me you have no tests" level of facepalm. I work on safety-critical software where if we had this type of quality escape both internal auditors and external regulators would be breathing down our necks wondering how our engineering process failed and let this through. They need to rearchitect their org to put greater emphasis on verification and software quality assurance.


> In particular, our code to parse .deb, .ar, .tar, and the HTTP signature verification code would strongly benefit from memory safe languages

> Critical infrastructure still written in C - particularly code that parses data from untrusted sources - is technical debt that is only going to get worse over time.

But hasn't all that foundational code been stable and wrung out already over the last 30+ years? The .tar and .ar file formats are both from the 70s; what new benefits will users or developers gain from that thoroughly battle-tested code being thrown out and rewritten in a new language with a whole new set of compatibility issues and bugs?


I wish, but I get new security bugs in those components like every year or so. Not all are tracked with security updates, to be fair; for some we say it's your own fault if you use the library to parse untrusted code.

After all, the library wasn't designed around safety; we assumed the .debs you pass to it are trusted in some way - because you publish them to your repository, or you are about to install them and they have root maintainer scripts anyway.

But as stuff like hosting sites and PPAs came up, we have operators publishing debs for untrusted users, and hence suddenly there was a security boundary of sorts and these bugs became problematic.

Of course memory safety here is only one concern, if you have say one process publishing repos for multiple users, panics can also cause a denial of service, but it's a step forward from potential code execution exploits.

I anticipate the rewrites will be as close to 1-to-1 as possible, to avoid introducing bugs, but with actual unit tests added to them.


"The best laid plans of mice and men often go awry."


> But hasn't all that foundational code been stable and wrung out already over the last 30+ years?

Not necessarily. The "HTTP signature verification code" sounds like it's invoking cryptography, and the sense I've had from watching the people who maintain cryptographic libraries is that the "foundational code" is the sort of stuff you should run away screaming from. In general, it seems to me to be the cryptography folks who have beat the drum hardest for moving to Rust.

As for other kind of parsing code, the various archive file formats aren't exactly evolving, so there's little reason to update them. On the other hand, this is exactly the kind of space where there's critical infrastructure that has probably had very little investment in adversarial testing either in the past or present, and so it's not clear that their age has actually led to security-critical bugs being shaken out. Much as how OpenSSL had a trivially-exploitable, high criticality exploit for two years before anybody noticed.


For actual cryptography code, the best path is formally verified implementations of the crypto algorithms, with parsers for wrapper formats like OpenPGP or PKCS#7 implemented in a memory-safe language.

You don't want the core cryptography implemented in Rust for Rust's sake when there's a formally verified Assembler version next to it. Formally verified _always_ beats anything else.


I should have clarified that I was primarily referring to the stuff dealing with all the wrapper formats (like PKIX certificate verification), not the core cryptographic algorithms themselves.

The core cryptographic algorithms, IMHO, should be written in a dedicated language for writing cryptographic algorithms so that they can get formally-verified constant-time assembly out of it without having to complain to us compiler writers that we keep figuring out how to deobfuscate their branches.


Sure. But assembly implementations by definition are not portable. And I don't know what it takes to write a formally verified library like this, but I bet it's very expensive.

In contrast, a Rust implementation can be compiled for many architectures easily, and is intrinsically safer than a C version.

Plus, cryptography and PKI are constantly evolving, so they can't benefit from decades-old trusted implementations.


> Formally verified _always_ beats anything else.

Formally verified in an obscure language where it's difficult to find maintainers does not beat something written in a more "popular" language, even if it hasn't been formally verified (yet?).

And these days I would (unfortunately) consider assembly as an "obscure language".

(At any rate, I assume Rust versions of cryptographic primitives will still have some inline assembly to optimize for different platforms, or, at the very least, make use of compiler intrinsics, which are safer than assembly, but still not fully safe.)


With crypto, you really want to just write the assembly, due to timing issues that higher level languages simply cannot guarantee.


It's insanely complex, particularly if you want _verified_ crypto. Last year (or two years ago?) I had to fix a tiny typo in OpenSSL's ARM assembly, for example; it was breaking APT and Postgres left and right, but only got triggered on AWS :D


You don't want to write the whole thing in assembly, just the parts that need to be constant time. Even those are better written as subroutines called from the main implementation.

Take BLAKE3 as an example. There's asm for the critical bits, but the structural parts that are going to be read most often are written in Rust, like the reference impl.


Yes, for sure.


I would like a special-purpose language to exist precisely for writing cryptographic code where you always want the constant-time algorithm. In this niche language, "We found a 20% speed-up for Blemvich-Smith, oops, it actually isn't constant time on Arrow Lake microcode versions 18 through 46" wouldn't even get into a nightly, let alone be released for use.

It seems that for reasons I don't understand this idea isn't popular and people really like hand rolling assembly.


There's been plenty, like RobustIsoCrypt or FaCT:

https://github.com/PLSysSec/FaCT

They struggle to guarantee constant time for subroutines within a non-constant time application, which is how most people want to use cryptography.


I do think this is pretty much the one use case for a true "portable assembler", where it basically is assembly except the compiler will do the register allocation and instruction selection for you (so you don't have to deal with, e.g., the case that add32 y, x, 0xabcdef isn't an encodable instruction because the immediate is too large).


You can't avoid those with NASA Power of 10 sorts of restrictions?


If you mean GnuPG, that is what Snowden used. It could be better than new software that may have new bugs. Memory safety is a very small part of cryptographic safety.

(New cryptographic software can also be developed by all sorts of people. In this case I'm not familiar, but we do know that GnuPG worked for the highest profile case imaginable.)


GPG works great if you use it to encrypt and decrypt emails manually as the authors intended. The PGP/GPG algorithms were never intended for use in APIs or web interfaces.

Ironically, it was the urge not to roll your own cryptography that got people caught in GPG-related security vulnerabilities.


There are none. This is a Canonical employee trying to force Ubuntu's decisions (Rust coreutils) on the wider Debian community.

Additionally, the fact that this comes across as so abrasive and off-putting is on brand for online Rust evangelism.


> But hasn't all that foundational code been stable and wrung out already over the last 30+ years?

No: a little less than 5 years ago there was CVE-2020-27350, a memory safety bug in the tar/ar implementations.


But just this year there was CVE-2025-62518 in tokio-tar.


All software is stable and wrung out until someone finds an exploit.


Recently the rust coreutils had a bug and this essentially disabled auto-updates on Ubuntu. :)

Seeing this tone-deaf message from an Ubuntu employee would be funny if I didn’t actually use Ubuntu. Looks like I have to correct that…


Isn't it also funny that all of these things are done by the same person?

In all seriousness though, let me assure you that I plan to take a very considerate approach to Rust in APT. A significant benefit of doing Rust in APT, rather than rewriting APT from scratch in Rust, is that we can avoid redoing all our past mistakes, because we can look at our own code and translate it directly.


Honestly having seen trainwreck after trainwreck after trainwreck come out of Canonical for the last decade, I'm sure I'm not the only one that has strong doubts about anyone associated being able to "avoid redoing past mistakes" or to make things not suck.


You have never been skilled at being considerate:

https://github.com/keepassxreboot/keepassxc/issues/10725#iss...


Christ that was handled badly.


Seems reasonable. I wish you would have written that in your original message. Good luck…


I worked with ASN.1 for a few years in the embedded space because it's used for communications between aircraft and air traffic control in Europe [1]. I enjoyed it. BER encoding is pretty much the tightest way to represent messages on the wire, and when you're charged per-bit for messaging, it all adds up. When a messaging syntax is defined in ASN.1 in an international standard (ICAO 9880, anyone?), it's going to be around for a while. I haven't been able to get my current company to adopt ASN.1 to replace our existing homegrown serialization format.

[1] https://en.wikipedia.org/wiki/Aeronautical_Telecommunication...
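
For anyone who hasn't seen ASN.1 in the wild, here's a minimal sketch of BER's tag-length-value encoding using the third-party pyasn1 package (my choice purely for illustration; the message and field names are made up and not from ICAO 9880):

    from pyasn1.type import univ, namedtype
    from pyasn1.codec.ber import decoder, encoder

    # A toy two-field message; real ATN/CPDLC messages are far richer.
    class Position(univ.Sequence):
        componentType = namedtype.NamedTypes(
            namedtype.NamedType('latitude', univ.Integer()),
            namedtype.NamedType('longitude', univ.Integer()),
        )

    msg = Position()
    msg['latitude'] = 47
    msg['longitude'] = -122

    wire = encoder.encode(msg)
    print(wire.hex(), len(wire))  # 300602012f020186 8 -> nested tag/length/value

    decoded, _ = decoder.decode(wire, asn1Spec=Position())
    print(int(decoded['latitude']), int(decoded['longitude']))

Eight bytes on the wire for two integers; the JSON equivalent is several times that before you even add framing.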


Isn't PER or OER more compact? Especially for the per-bit charging thing.


Oh yeah, derp. I was thinking unaligned-PER, not BER.


Of all the encodings, I like BER the most as well.

(I worked in telecommunications when ASN.1 was a common thing.)


> This friend told me she can't work without ChatGPT anymore.

It doesn't say she chooses to use it; it says she can't work without using it. At my workplace, senior leadership has mandated that software engineers use our internal AI chat tooling daily, they monitor the usage statistics, and are updating engineering leveling guides to include sufficient usage of AI being required for promotions. So I can't work without AI anymore, but it doesn't mean I choose to.


> those who want to parse a JSON document with a YAML parser.

I've done it. We already had a YAML parser in an internal library I maintain, since we were already ingesting YAML files for other reasons, so when we later added new files for a different reason that someone decided should be in JSON instead, it was easier and cleaner to keep using the existing YAML parser we already had incorporated rather than add a separate JSON parser alongside it.
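
This works because YAML 1.2 is essentially a superset of JSON, so a JSON document is normally also a valid YAML document. A minimal sketch of the idea with PyYAML (Python and PyYAML are just stand-ins here, not the internal library in question; note PyYAML targets YAML 1.1, so a few JSON edge cases differ):

    import yaml  # PyYAML, the parser already pulled in for the .yaml files

    # A JSON config handed to us later; JSON syntax is valid YAML flow style.
    json_text = '{"name": "example", "retries": 3, "tags": ["a", "b"]}'

    # The same safe_load call used for the YAML files handles it, so no
    # second parser dependency is needed.
    config = yaml.safe_load(json_text)

    print(config["name"], config["retries"], config["tags"])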


It has not so far.

"Experts say that airdrops, another measure Israel announced, are insufficient for the immense need in Gaza and dangerous to people on the ground."[1]

"[T]he airdrops have an advantage over trucks because planes can move aid to a particular location very quickly. But in terms of volume, the airdrops will be 'a supplement to, not a replacement for moving things in by ground.'"[2]

The airdrops killed people when 1) the containers landed on occupied tents and 2) containers landed in the water and people drowned attempting to retrieve the aid. Trucks can also deliver vastly larger quantities of aid substantially faster and cheaper than planes.

[1] https://apnews.com/article/gaza-starvation-israel-palestinia...

[2] https://apnews.com/article/israel-hamas-gaza-airdrop-humanit...


My department at the place I work is actively hiring Software Engineers. We have nine open requisitions for any seniority level and are regularly conducting interviews, but the new-grad candidates this year have been... disappointing.

I've conducted two phone screens this month and asked each candidate to implement FizzBuzz in their language of choice after giving them an explanation of the problem. Both took more than ten minutes to write out a solution and we don't even require them to run it; I'll excuse trivial syntax errors in an interview setting if I can tell what you meant.

When CS students can't write a basic for loop and use the modulo operator without relying on AI, I weep for their generation.
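
For context, the entire exercise is one loop plus the modulo operator; something in this neighborhood (Python shown, but any language the candidate prefers is fine) is all we're hoping to see within a few minutes:

    # Classic FizzBuzz: multiples of 3 print "Fizz", multiples of 5 print
    # "Buzz", multiples of both print "FizzBuzz", everything else the number.
    for i in range(1, 101):
        if i % 15 == 0:
            print("FizzBuzz")
        elif i % 3 == 0:
            print("Fizz")
        elif i % 5 == 0:
            print("Buzz")
        else:
            print(i)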


I also tutor students in the entry level C++ and Python courses (which are taken during your first two semesters as a CS student), and I must agree that a large cohort of my class is only able to program if they have ChatGPT/Claude open on one half of their screen. I'm not sure how to solve this either, unless we want to start doing in person "interview" styled questions as an exam on a locked down computer.

I honestly think that doing an in person fake technical interview with a few easy Leetcode questions at the end of your education would be a good way to weed out those that have failed to even learn the basics of the trade.


I'm so old I remember when calculators started appearing in ordinary people's hands. Schools first banned them (what da ya mean you can't add a column of numbers by eye?) But gradually we switched over. We had a brief flirtation with log tables, and never did get to use a slide rule. I've no doubt old-school businesses were aghast at our ineptitude.

I'm so old we learned to program with giant C reference books. There was no internet, much less Google. We didn't have no fancy auto-complete, crumbs a text editor was considered advanced. Them youngsters coming to us couldn't program without Googling syntax, or using an IDE.

So yeah, sure, AI is changing the game. It's hard to evaluate students because the tools they are using are different to our experience. For decades we "make them code" as a measure of ability. In 3 years (their college experience) the toolset has changed.

Good students, good employees, are those who understand the problem and can adapt to a solution. AI is a tool that can be wielded well, or badly. Our approach to hiring will need to adapt as well. But good people are still out there, and good people make good workers.

To be honest I never was much in love with the leet code measure of hiring. Past a certain coding skill level I was more interested in the person than their ability to memorize an algorithm. Today that necessary skill level is lower, or at least harder to evaluate, but the problem-solving-mind is still the thing we're looking for.

So be careful of seeing the use of new tools as a weakness. The history of the world is littered with obsolete technologies. (Here's a sextant, where are we?) Rather see people who use tools for what they are, tools. Look for people who are curious, who see patterns, who get things done.

And to students I say, mastery of tools is a necessary step, but ultimately an uninteresting one. See beyond them. Be curious. Look under the hood. Ask questions like "is this code good enough to be running 30 years from now?" Because a huge amount of what you see now has foundations in code written a long time ago, and written well enough to stand for decades.

College is not "learning to program". College is learning how to adapt to an ever changing world, that will require your adapting many times over your career.


> College is not "learning to program". College is learning how to adapt to an ever changing world, that will require your adapting many times over your career.

You're gonna have to do a lot of work to convince me that people who only know how to drive an LLM are learning how to adapt to sweet fuck all

At least with a calculator, people still had to know the difference between addition and multiplication, in order to use the calculator correctly


> You're gonna have to do a lot of work to convince me that people who only know how to drive an LLM are learning how to adapt to sweet fuck all

Driving an LLM properly requires knowing how to evaluate whether the results are correct. People can certainly try to pass generated code off in a PR. But even one round of code review feedback or debugging should uncover whether the person understood what they were doing.


What if driving an LLM well is actually a desirable skill?

What if changing from a "write code" based idea of programming changes to a "remove technical debt from code" skill?

What if the next generation of programmers is not focused on the creation of new code, but rather the improvement of existing code?

What if the current crop of programmers has to literally adapt from a world that has valued code quantity to a world that values code quality (something we don't especially prioritize at the moment)?

I'd argue that we're asking the current generation to be massively adaptable - from what was expected of us 10 (or 30) years ago to what will be required of them 5 years from now.

And to be clear, I'm not suggesting that LLMs will teach them to be adaptable. I'm suggesting that a world that contains LLMs will require them to be adaptable.


> What if changing from a "write code" based idea of programming changes to a "remove technical debt from code" skill

I don't believe you can do this if you can't write code, but sure. Maybe

> What if the current crop of programmers has to literally adapt from a world that has valued code quantity to a world that values code quality

LLMs seem more likely to increase the value of quantity and decrease the value of quality. That's playing out in front of us right now, with people "vibecoding"

> I'm suggesting that a world that contains LLMs will require them to be adaptable.

And ones who can't adapt will be ground to mulch to fuel the LLM power plants no doubt


Your response just triggered a deja-vu from back when scaffolding tools were the new hot thing: suddenly everyone and their dog was able to spin up that todo application with one CLI command. Except the generated code was mostly boilerplate that had to be heavily adapted for any real-life use case, unveiling all the ignorance that could be covered up to that point. It's the same with vibe code. Looks fun until you throw it into reality - and then you're on your own and had better know how to deal with stuff.


I don't think you can compare a calculator to an LLM.

A calculator will always give you a correct result as long as you give it correct input. This is not the case with an LLM. No matter how good your prompt is, there's always a chance the output is complete garbage.


One big problem from the hiring side is the time to evaluate someone once complex tools are involved.


Did you ever consider the idea that AI is not the same as a calculator? Or the fact that there is no reason why there couldn't be another quantum leap next year? And another one after that?


I teach computer science at a community college in Silicon Valley. Even before generative AI became available to the general public, cheating has been an issue with CS programming assignments.

One way I try to disincentivize cheating on projects is by having in-class paper exams, including weekly quizzes, as well as in-class paper assignments, and making sure that these in-class assessments are weighted significantly (roughly 60% of the overall grade). No electronic devices are allowed for these assignments. This forces my students to be able to write code without being able look up things online or consult an AI tool.

I still assign take-home programming projects that take 1-2 weeks to complete; students submit compilable source code. Practical hands-on programming experience is still vital, and even though cheating is possible, the vast majority of my students want to learn and are honest.

Still, for in-person assessments, if I had the budget, I'd prefer to hand out laptops with no Internet connection and a spartan selection of software, just a text editor and the relevant compiler/interpreter. It would make grading in-class submissions easier. But since we don't have this budget, in-class exams and exercises are the next best solution I could think of.


This reply will likely sound disrespectful, but I post it not to be so, but rather to perhaps spark an alternate path.

As the world changes, education can be slowest to adapt. My father did his math on a slide rule. I was in high school as we transitioned to using calculators.

My personal take on your approach is that you're seeing this from the wrong side. Creating an artificial environment for testing suggests to me you're testing the wrong thing.

Of course most school, and college, classes devolve to testing memory. "Here's the stuff to learn, remember it enough to pass the exam." And I get it, this is the way it's always been, regardless of the uselessness of the information. Who can remember when Charles 1st was beheaded? Who can't Google it in an instant?

Programming on paper without online reference tools isn't a measure of anything, because in the real world those tools exist.

Indeed, the very notion that we should even be testing "ability to write code" is outdated. That the student can create code should be a given.

Rather an exam should test understanding, not facts. Here's 2 blocks of code, which is better and why? Here's some code, what are the things about it that concern you?

Instead of treating the use of AI (or Google, or online help, or that giant C reference book I had) as "cheating", perhaps teach and assess in a world where AI exists.

I truly do get it. Testing comprehension is hard. Testing understanding is hard. Testing to sift wheat from chaff is hard. But, and I'm being harsh here I know, testing memory as a proxy for intelligence or testing hand-coded output as a proxy for understanding code is borderline meaningless.

Perhaps in the age of AI the focus switches from 'writing code' to 'reading code'. From the ability to write to the ability to prompt, review, evaluate and so on.

Perhaps the skill that needs to be taught (to the degree that community college seeks to teach skills) needs to be programming with AI, not against it.

I say all this with respect for how hard your job is, and with my thanks that you do it at all. I also say it understanding that it's a huge burden on you that you didn't necessarily sign up for.


The problem is that tools like AI are useful if and only if you have the prerequisite knowledge, otherwise they are self-destructive.

It's similar to a calculator. We give students graphing calculators, but ONLY after they have already graphed by hand hundreds of times. Why? Because education does not work like other things.

Efficiency, in education, is bad. We don't want to solve problems as fast as possible, we want to form the best understanding of problems possible. When I, say, want to book an airplane ticket, I want to do that in the fastest way possible. The most efficient manner. I care not about how an airport works, or how flight numbers are decided, or how planes work.

But efficient education is bad education. We could skip 99% of education if we wanted. We could have, say, the SAT - and spend 1 year studying only for the SAT. Don't bother with the other 12 years of schooling.

Will you get an acceptable score on the SAT this way? Maybe. Will you be intelligent? No, you will be functionally illiterate.

If we use AI for programming before we can program, then we will be bad programmers. Yes, we can pass a test. Yes, we can pass a quiz. But we don't know what we're doing, because education is cumulative. If we skip steps, we lose. If we cut corners, we lose. It's like trying to put a roof on a house when the foundation isn't even poured.


I wish I could have gone to these schools where testing is just memorization. Everything would have been so easy


> I'm not sure how to solve this either, unless we want to start doing in person "interview" styled questions as an exam on a locked down computer.

Don't lock down the computer unless you are hiring people to work in a SCIF. Instead, give candidates a brutally hard/weird problem and tell them to use any resources they can get their hands on, by fair means or foul. (They will do that anyway if you hire them.) Then watch how they deal with it.

Do they just give up and stalk off in a huff?

If they Google for answers, do they use sensible queries?

If they use AI, do their prompts show skill at getting ideas, avoiding blind alleys, and creating effective tests?

If they call their friends, see how effective they are at communicating the requirements and turning the answers into a solution. Might be management material.


I’ll second this, and we had enough resumes to only interview those with a relevant Master’s degree. I was shocked and I still don’t have a full explanation. I don’t doubt that it’s also hard out there, but on the hiring side we also did far more interviews than we wanted. (And yes the salary is >>100k, full remote, benefits etc)


> When CS students can't write a basic for loop and use the modulo operator without relying on AI, I weep for their generation.

I feel like this doesn't get said enough, but I'm almost certain your issue is happening during filtering, prior to even getting to the interview stage. Companies are straight up choosing (the wrong) applicants to interview, the applicant fails the interview, the company does not move forward with them, and then the company does not go back and consider the people they originally filtered out.

I know companies get swamped with tons of applications, and filtering is basically an impossible problem since anyone can make their resume look good, but every applicant that applied can't be that bad.

Bad applicant filtering at the first step is hurting both companies and applicants.


2 data points and you're drawing a conclusion about an entire graduating class? For all we know, the reality might be that your company isn't able to attract great young talent.


FizzBuzz was always a great filter. Even in the pre-LLM days. Many people can code for years and never once use the modulo operator. Solving the problem gets a lot more clunky without it and they get rejected.
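
For the curious, the modulo-free route usually turns into something like this sketch with hand-rolled counters - workable, but it's easy to botch the resets under interview pressure:

    # FizzBuzz without the modulo operator: count up to 3 and 5 manually
    # and reset each counter when it hits its target.
    three, five = 0, 0
    for i in range(1, 101):
        three += 1
        five += 1
        word = ""
        if three == 3:
            word += "Fizz"
            three = 0
        if five == 5:
            word += "Buzz"
            five = 0
        print(word if word else i)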


Yes, but it's also one of the most common programming questions at non-FAANG companies. Are grads not preparing for interviews? It's one Google search away on Jeff Atwood's blog.


When I was in school in the early 2010s I worked in a professor's lab and overheard conversations about the administration telling profs/TAs to pass kids the profs/TAs thought should have failed. I've since seen the required coursework to graduate become less rigorous. There were students I graduated with, and worked with personally, who were very bad. I'm sure there are still great students who care about learning, but I cannot imagine how bad the average student is with ChatGPT being able to do students' assignments.


Are you offering enough pay that competent people would want to work there?


We're in the greater Seattle area and I make north of $200k, so I feel like yes :shrug:


My experience is that this problem significantly predates AI. Not that AI won't make it worse, but pre-2020 the majority of entry level developer applicants I interviewed could not write a basic for loop in their choice of language, never mind the modulo operator.

