Hacker News | willahmad's comments

> Staff Eng .... has much more job mobility

Not really.

At Staff and above (Staff+), companies are usually looking for domain expertise in addition to cross-org leadership. Unless you want to get hired with a Sr title.

Management is different though: you have a highly transferrable skillset, managing people, up and down.


> you have highly transferrable skillset

Of course this also means the pool of people who can do your job or quickly learn it includes essentially every other EM.

And many of those people are looking for jobs now.

For an IC, no one can become an expert in Rust overnight.


> Management is different though, you have highly transferrable skillset, managing people, up and down

Most tech companies are not hiring an EM without relevant domain experience. "People Management" is a table stakes skill in 2026 and Staff/Principal Engineers and Product Managers largely offer that as well as technical or product insight.

Additionally, it's something that can be cultivated in-house and is why internal promotions to EM tend to be preferred unless a director, principal engineer, or PM is getting their friend a job (which happens fairly often).


Can you share more about the data availability part?

What kind of data do you need that's missing now? Aren't simulations enough? Curious to learn more about the bottlenecks in general.


I think it’s a combination of simulation, YouTube videos, and specially recorded training footage. The last one is expensive, but given the funding these startups receive, I’m pretty sure they can scale their RL methods at least 10x.

Without language-level support, it makes the code look like a mess.

Imagine 3 levels of nested calls where each function calls another 3 methods; we're talking about ~28 functions, each with a couple of variables. Of course you can still clean them up yourself, but imagine how much cleaner the code looks if you don't have to.

Just like with garbage collection: you can free memory yourself, but someone forgets something and you end up with either a memory leak or a security issue.


With good helpers, it could become something as simple as

    key := make([]byte, 32)
    defer scramble(&key)
    // do all the secret stuff

Unless I don't understand the problem correctly.


There are two main reasons why this approach isn't sufficient at a technical level, which are brought up by comments on the original proposal: https://github.com/golang/go/issues/21865

1) You are almost certainly going to be passing that key material to some other functions, and those functions may allocate and copy your data around; while core crypto operations could probably be identified and given special protection in their own right, this still creates a hole for "helper" functions that sit in the middle

2) The compiler can always keep some data in registers, and most Go code can be interrupted at any time, with the registers of the running goroutine copied to somewhere in memory temporarily; this is beyond your control and cannot be patched up after the fact by you even once control returns to your goroutine

So, even with your approach, (2) is a pretty serious and fundamental issue, and (1) is a pretty serious but mostly ergonomic issue. The two APIs also illustrate a basic difference in posture: secret.Do wipes everything except what you intentionally preserve beyond its scope, while scramble wipes only what you think it is important to wipe.


Thanks, you brought up good points.

In my case I had a program in which I created an instance of such a secret, "used it", and then scrambled it; the variable never left that scope, so it worked.

Though I didn't think of (2), which is especially problematic.

I'd probably still scramble in the places where it's viable to implement, trying to reduce the attack surface even if I can't fully remove it.


As I understand it, this is too brittle. I think this is trivially defeated if someone adds an append to your code:

    func do_another_important_thing(key []byte) []byte {
        newKey := append(key, 0x0, 0x1) // len == cap here, so this allocates a copy!
        return newKey
    }

    key := make([]byte, 32)
    defer scramble(&key)
    do_another_important_thing(key)
    // do all the secret stuff

Because of the copy that append might make, you now have 2 copies of the key in memory, but you only scramble one. Many functions might copy the data, given that you don't manually manage memory in Go. And if you're writing an open source library that might have dozens of authors, it's better for the language to provide a guarantee than to hope that a developer who probably isn't even born yet will remember not to call an "insecure" function.


Yep, that's what I had in mind.


This proposal is worse because all the valuable regions of code will be clearly annotated for static analysis, either explicitly via a library/function call, or heuristically using the same boilerplate or fences.


Makes sense: it basically creates an easy-to-spot pattern for static analysis to find everything security related.

As another response pointed out, it's also possible that the secret data is still in a register, which could survive no matter what we do to the current value.

Thanks for pointing it out!


> Makes sense: it basically creates an easy-to-spot pattern for static analysis to find everything security related.

This is essentially already the case whenever you use encryption, because there are tell-tale signs you can detect (e.g., the AES S-box constants). But this will make it even easier, and will also tip you off to critical sections that are sensitive yet don't involve encryption (e.g., secure strings).


I could imagine code that did something like this for primitives:

  secretStash := NewSecretStash()
  pString := secretStash.NewString()
  ....
  ....
  secretStash.Thrash()
Yes, you now have to deal in pointers, but that's not too ugly, and everything is stored in secretStash, so it can iterate over all the types it supports and thrash them to make them unusable, even without the GC running.


I used to see this in bash scripts all the time. It's somewhat gone out of favor (along with long bash scripts in general).

If you had to prompt a user for a password, you’d read it in, use it, then thrash the value.

    read -p "Password: " PASSWD
    # do something with $PASSWD
    PASSWD="XXXXXXXXXXXXXXXXXX"
It’s not pretty, but a similar concept. (I also don't know how helpful it actually is, but that's another question...)


That's even better than what I had in mind, but agreed, it's also a good way to just scramble stuff into something unusable ++


I'm now wondering whether, with a bit of unsafe, reflection, and generics magic, one could make it work with any struct as well (use reflection to instantiate a generic type and unsafe to just overwrite the bytes).


It's only theory; in practice it's not going to happen.

In most cases, the authorization server is called from each microservice.

Evented systems loop back, and it's difficult to avoid, e.g.: order created -> charge -> charge failed -> order cancelled.


are we doomed yet?

Seems not yet with 5.2


Don't underestimate AWS.

AWS can make it seamless, so you can run open source models on their hardware.

See their ARM-based instances: you rarely notice you're running on ARM when using Lambda, k8s, Fargate, and others.


This assumes the generated code is always correct and does exactly what's needed.


Same for MCP: there is always a chance an agent will mess up the tool use.

This kind of LLM non-determinism is something you have to live with. And it's the reason why I personally think the whole agents thing is way over-hyped: who needs systems that only work 2 times out of 3, lol.


The fraction is a lot higher than 2/3 and tool calls are how you give it useful determinism.


Even if each agent has 95% reliability, with just 5 agents in the loop the whole thing is only 0.95^5 ≈ 77% reliable.


Well fortunately that's not what actually happens in practice.


If it lasts even 1 year with, on average, 1 min of audio, I think it's a nice disposable device to have.

Some ideas, if you have an app that can be integrated with other services:

* I feel sick today, notify my manager; I'll probably stay home

* schedule a task to pick up the trash

* something to remember: colleague X told me he's using service A for data cleanup

...


I think this benchmark could be slightly misleading for assessing coding models. But still a very good result.

Yes, SVG is code, but not in the sense of an executable with verifiable inputs and outputs.


I love that we are earnestly contemplating the merits of the pelican benchmark. What a timeline.


It's not even halfway up the list of inane things of the AI hype cycle.


But it does have a verifiable output, no more or less than HTML+CSS. Not sure what you mean by "input" -- it's not a function that takes in parameters if that's what you're getting at, but not every app does.


Django is awesome, but I wish there were an easy way to use modern web frameworks with it.

A lot of the time it's either Next.js/Nuxt.js + Django as an API, or a complex bundling process that requires a file where you register bundle versions/manifests, then another build step that embeds them into the template.

Both are so complex.


Django is a modern web framework. It simply doesn't follow the hype around JS SPAs. However, if you really want to, you can of course still render static content + serve a JS framework like Vue to the client, and then have dynamic widgets rendered on the client side.

If you want to build an SPA anyway, then Django is not the right framework to start with though.


I use Django + DRF for SPAs. The SPA bundle is served from an nginx container, but I don't see any problem with building it into the Django image either.

What is the right framework for building SPAs?


Probably one whose basic purpose is building SPAs. Django has all the tools for making a multi-page website/application, which you are then not using. There is probably a framework built around the idea of making an SPA that doesn't include the other stuff.


It's not only about SPA vs. non-SPA.

For example, there is a nice component library, shadcn; of course you can somehow embed it into the project, but to use it productively you need a bundler, which is outside the Django ecosystem.

Also, if you look at AI-generated content, a lot of models are optimized for outputting JS for the frontend; try embedding that in a Django project, it's non-trivial.


Have you tried Django with Inertia.js?

https://inertiajs.com/


Django + AlpineJS + HTMX is pretty nice.


That's just for the HTML content though. What if you want to add some non-trivial Javascript or generated CSS? Or maybe you want to integrate a frontend tool like Storybook[0] even if your HTML is rendered server-side? Maybe add some tests for your frontend code? There is much more between raw hand-rolled HTML/CSS/JS and a full-blown SPA.

At my day job we use Django with HTMX and Alpine, but we also generate the custom CSS from Pico[1] and use JinjaX[2] to define server-side components which we then render in Storybook. We use Vite as our bundler to compile the JS and CSS as well as to run Storybook. The project has to live in both the Python ecosystem and the Node.js ecosystem.

Even with just HTMX and Alpine you might want to compile a custom version of those with certain plugins, or you might want to load them as libraries in your own scripts.

[0] https://storybook.js.org/ [1] https://picocss.com/ [2] https://jinjax.scaletti.dev/


Especially with template partials now in the core of Django 6.


Yes, the API route is very complex, and then you have to have a team with proficiency in two parallel sets of web technologies -- Python vs. JavaScript. That said, the fact that you can go that route means Django can be a good pick for early-stage projects where you don't need a frontend framework, because there's the optionality to add one later if your project really requires it.

