
Why would you ever call setenv after spawning threads though?

Or are there other sneaky calls which will do that behind your back?



...on Windows, single-threaded programs don't really exist; any DLL can, and most of them do, spawn worker threads as an implementation detail. Some of them do it the moment their initializer is run, so if you link your program against anything other than kernel32 and its friends (the basic Windows system libraries don't spawn worker threads on being loaded), then by the time a thread finally starts executing your executable's entry point there is no guarantee that it is the only thread in your process. And in fact, finding a non-toy, real-world Windows application that has only one thread is almost impossible (for example, IIRC all .NET runtimes have a worker-thread pool from the get-go, which rules out any .NET executable).

Which is why on Windows there are almost no system APIs that can only be safely used in single-threaded applications (well, almost: there were some weird technical decisions around single-threaded apartments for COM...).

Maybe in several more decades the Linux community will also accept that multi-threaded applications are an entirely normal and inevitable thing, not an aberration of nature that we had all best pretend doesn't exist until we're absolutely forced to deal with its reality.


Well, that's the gotcha, isn't it?

It's easy to imagine some complex interactive software where the need to call setenv appears only after you already have worker threads doing something else. Without a warning, you won't know it's a bad thing to do, and the manpage only says that it and unsetenv are not thread-safe, as if that were remotely enough information.

What nobody tells you is that environment access is so pervasive that even compressing data or opening an IPv6 connection can read it under the hood. It's not obvious at all that you can't do those things while another thread is editing a variable.


There’s always a lot of weird emergent behavior in bootstrapping an app, and on an app of any serious size, I can’t entirely control if someone decides to spool up a thread pool on startup so that everything is hot before listen() happens.

I may think I have control, I may believe that a handful of us are entitled to have that say, but all it takes is someone adding a cross dependency that forces an existing piece of code to jump from 20th position in the load order to 6th and all hell can break loose. Or just as often, set a ticking time bomb that nobody notices until there’s a scaling or peak traffic event or someone adds one more small mistake to the code and foomp! up it goes.


That’s literally explained in the article. It’s worth reading more than the headline.

Ed: actually, that’s even spelled out in the headline.


It’s neither in the headline nor in the article. The question was about setenv, not getenv.

It is best to avoid calling setenv in a threaded program. Some programs do it to make space for rewriting argv with large strings (freeing space from *environ, which tends to sit right after the tail of argv). Some programs or libraries use *environ directly to stage variables for exec before forking. Some want to pass variable changes to forks. There are alternatives, but in the context of something like Go calling libc setenv, it's done to make interop easier. Sadly, it may make other interop harder, as in this case.


? setenv not getenv. You'd rarely use setenv, and even then you'd do it at startup.


Right. That's been my experience so far, hence my question.


It's not. The OP was asking about setenv, not getenv...



