Odd, I don't see any mention of subprocess.run, the workhorse of python scripting.
Quick rundown for the unfamiliar:
Give it a command as a list of strings (e.g., subprocess.run(["echo", "foo"])).
It takes a bunch of flags, but the most useful (but not immediately obvious) ones are:
check=True: Raise an error if the command fails
capture_output=True: Captures stdout/stderr on the CompletedProcess
text=True: Automatically convert the stdout/stderr bytes to strings
By default, subprocess.run will print the stdout/stderr to the script's output (like bash, basically), so I only bother with capture_output if I need information in the output for a later step.
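A minimal sketch of how those flags fit together (the command here is just an example; it assumes you're running inside a git checkout):

    import subprocess

    # Raise on non-zero exit, capture output, and decode bytes to str.
    result = subprocess.run(
        ["git", "rev-parse", "--abbrev-ref", "HEAD"],
        check=True,
        capture_output=True,
        text=True,
    )
    print(result.stdout.strip())  # current branch name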
Also `asyncio.subprocess`, which lets you manage multiple concurrently running commands. Very handy if you need to orchestrate several commands together.
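For example, a sketch like this (the commands are just placeholders) runs two commands concurrently and collects their output:

    import asyncio

    async def run(*cmd):
        # Start the process with stdout captured.
        proc = await asyncio.create_subprocess_exec(
            *cmd, stdout=asyncio.subprocess.PIPE
        )
        out, _ = await proc.communicate()
        return out.decode()

    async def main():
        # Both commands run at the same time.
        a, b = await asyncio.gather(run("echo", "foo"), run("echo", "bar"))
        print(a.strip(), b.strip())

    asyncio.run(main())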
By default, "sh" (1) captures the stdout and stderr of all processes and (2) creates a tty for each process's stdout.
Those are really bad defaults. The tty on stdout means many programs run in "interactive" rather than "batch" mode: programs which use a pager get their output truncated, and auto-colors may get enabled and emit ESC controls into the output stream (or not, depending on the user's distro... fun!). And captured stderr means warnings and progress messages just disappear.
For example, this hangs forever without any output, at least if executed from an interactive terminal:
from sh import man
print(man("tty"))
Compare to "subprocess" which does the right thing and returns manpage as a string:
Can you fix "sh"? sure, you need to bake in option to disable tty. But you've got to do it in _every_ script, or you'll see failure sooner or later. So it's much easier, not to mention safer, to simply use "subprocess". And as a bonus, one less dependency!
(Fun fact: back when "sh" first appeared, everyone was using "git log" as an example of why the tty was bad (it was silently truncating data). They fixed it... by disabling the tty only for the "git" command. So my example uses "man" :) )
`sh` is nice, but it requires a dependency, and no dependencies is nicer IMHO. uv makes this way easier, but for low-dependency systems or unknown environments, the stdlib is king.
I love how this import trick shows how hackable Python is - and it’s this very hackability that has led to so many of the advances we see in AI. Arguably without operator overloads we’d be 5 or more years behind.
I think the point is that for most things, you don't need to call any external tools. Python's standard library already comes with lots of features, and there are many packages you can install.
In JS, there are microtasks and macrotasks. setTimeout creates macrotasks. `.then` (and therefore `await`) creates microtasks.
Microtasks get executed BEFORE macrotasks, but they still get executed AFTER the current call stack is completed.
From OP (and better illustrated by GP's example), Python's surprise is that it just puts the awaited coroutine onto the current call stack. So `await` doesn't guarantee anything is going into a task queue (micro or macro) in Python.
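A small Python sketch of that difference (names are just illustrative): awaiting a coroutine runs it inline on the current stack, while only create_task puts it on the event loop's queue.

    import asyncio

    async def child(name):
        print(f"{name} body runs")

    async def main():
        task = asyncio.create_task(child("scheduled"))  # queued on the event loop
        await child("awaited")  # runs right here, on the current call stack
        print("back in main")
        await task              # only now does the loop get to run the task

    asyncio.run(main())
    # prints: awaited body runs, back in main, scheduled body runs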
That doesn't make sense. That would mean the awaiting function doesn't have access to the result of the Promise (since it can proceed before the Promise is fulfilled), which would break the entire point of promises.
Oh yes! This works with other intensifiers as well. "Crazy good", "wicked bad", "mad smart", etc. To my ears, eliding the -ly changes the meaning from the literal reading, to specifically the intensifier reading.
Goodness, yes. The last time I put (genuinely constructive) criticism in a peer evaluation, it turned out to be the only non-positive thing that was said about that coworker. So it became a focus of his yearly review.
He later told me about how his review went (casually at a conference; he had no idea I was the source), and I fessed up and clarified what I actually meant. The HR process had twisted it to a much more extreme version of what I was getting at, completely undermining the utility of the feedback.
Nowadays, I'm just gonna give perfect scores and if I have feedback that needs to be given, I'll just tell the coworker directly. (And if I'm not comfortable doing that, then the feedback probably isn't important enough.)
I think a big factor is that usually most people only give positive feedback and don't say anything negative or constructive. So when someone does, it's seen as "wow, this must be so bad that they just had to say something, no matter how delicately or toned down it's phrased". These days I just mention problems and concerns to the people making the decisions, because yearly review time is the wrong time to do it: at best they've only been doing this "bad" thing for a month or so, and at worst almost a whole year with no one doing anything about it.
How do you know that interviewees aren't spending more time on it?
Because you can't guarantee all candidates are spending the same amount of time, it becomes a game theory problem where the candidates will typically lose in some form. In many cases, the right answer is to spend extra time making a really polished (but not too polished!) solution and pretend like you stayed in the time limit. And every candidate is either a) doing that, or at least b) worried that their competition is doing that.
Even if we ignore that dynamic, 3 hours is a long ass time for a candidate to spend when they're not even sure they'll get to talk to another human about it.
In a 1-hour interview, you can run a candidate through a programming exercise and be guaranteed they're not wasting extra time on it. And if they happen to prefer doing take home assessments, you can always let them send you an updated answer later. (But often by the time a candidate asks me if they can do that, I've already developed a favorable view of their skills and can tell them, "go for it if you want, but you've already 'passed' my test.")
By keeping the candidate-interviewer time investment the same, you guarantee that you're respecting the candidate's time as you would your own (because you're sitting there with them.) I can help them skip over the parts I'm not interested in (e.g., by feeding them info they'd be able to find via search or telling them not to worry about certain details.)
If a hiring manager doesn't respect their candidates' time, how likely are they to respect their employees' time?
Yep, this is what I am taking from this thread: next time I am given a take-home, I am going to ask them to promise that I will get to talk to a human about it. They can of course straight-up lie about it, and I am sure I will run into such abysmal behavior at some point.
> How do you know that interviewees aren't spending more time on it?
In some cases we roughly timed it, scheduling an email for a time the candidate wanted and asking them to return the task 3 hours later. In other cases we just treated it as an honour system. We made it clear that the task was intended to take about that much time and that spending more time was not allowed/encouraged.
In reality, we found that good candidates took ~1-2h, and in some cases where candidates spent a lot longer and owned up to it, we found no improvements. In one case a candidate submitted at 3h and then again at 8, and we marked the 8h version 1 mark lower.
Great advice, but my case is different because our framework is REALLY hurting us.
/s
It's wild how easy it is to fall into this trap. IMO, if you're considering switching frameworks (especially for perf reasons), your time would be better spent getting parts of your app off the framework completely (assuming there's truly no way to get the results you want in your current framework).
I wonder if Bacardi might be a better analogue for what TSMC gets from this deal.
Bacardi started a distillery in Puerto Rico (iirc, to sell in the US without tariffs) well before the Cuban Revolution. When the Cuban government seized Bacardi's assets, they were able to move everything to their other sites in Puerto Rico and Mexico.
As you point out, I highly doubt this deal moves the needle on whether or not the US provides military aid to Taiwan. But it does help give them more options if the situation in Taiwan becomes untenable.
If you used leading whitespace, you could wind up with something like old BASICs, where you had to number your lines. It was great: you'd always skip 10 between lines, so if you had a bug you could stuff a patch in between existing lines at *5.
For the whitespace, you'd have to know how deeply to indent the outermost part of your code.
So if you add an if block to a for loop, every line of the code has to be re-indented, and only the contents of the new if would be at an indent of 0.
I am not going to write this pseudocode in AntiLang because I am not that much of a masochist.
for ( foo in somearray )
    doStuff(foo)
end
becomes
for ( foo in somearray )
    if ( condition )
        doStuff(foo)
    end
end
If you antilanged this the rest of the way, you could have a common `start` to indicate the start of a block and then replace the `end` with the actual conditional.
start
    doStuff(foo)
for( foo in somearray )
This gets horrible pretty quick. So as terrible as trailing line space is, leading line space is quite possibly worse.
Which leads me to the ultimate conclusion -
Whitespace should be balanced. If you need 4 spaces of indent, you also need 4 spaces at the end of the line.
The idea of a trailing whitespace sensitive language is... amazing.
Get real torturous with it and make the amount of trailing whitespace the line number, a la BASIC: 3 trailing spaces is line 3, for instance. Tab character counts as a powers-of-ten separator? Space space space tab space space is line 32?
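A toy Python sketch of that scheme, just to make the arithmetic concrete (purely hypothetical, obviously):

    def trailing_line_number(line):
        # Trailing runs of spaces are digits; tabs separate powers of ten.
        body = line.rstrip(" \t")
        trailing = line[len(body):]
        digits = [str(len(group)) for group in trailing.split("\t")]
        return int("".join(digits))

    # 3 spaces, tab, 2 spaces -> line 32
    print(trailing_line_number("doStuff(foo)   \t  "))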