A real innovation from the Bitcoin world! Several physical password-storage systems have been suggested for this kind of use case. The simplest is basically using a nail to punch out a password onto a piece of sheet metal.
There is of course a huge difference between left and right, but the democratic party is actually center-right, so...
The previous poster didn't say there's no difference between left and right; they said both parties are bought and paid for by fascists, which is pretty much true, thanks to Citizens United v. FEC, which was decided the last time Democrats had control of Congress and the presidency. Congress could have responded, but didn't.
At the time, Democrats had 60 (!) seats in the Senate, enough to end a filibuster, and they still had to negotiate with MODERATE DEMOCRATS to pass the ACA. Moderate Democrats are, on the face of it, the reason the ACA doesn't have a public option.
Don't get me wrong, I still vote Democrat any chance I get, and would encourage everyone to do the same, but unfortunately I have to do it despite the fact that they're bought and paid for by the donor class, who are, by and large, fascists.
Democrats should have started on Jan 7th by screaming for Trump's arrest and not stopped until he was rotting in jail, but all we got was 4 years of nothing, followed by "too bad, so sad, we did everything we could".
This is one of those times where technically correct isn’t the best kind of correct.
Ok yeah fine there are fascists in both parties. Now that we have that out of the way where are we? Oh, right. The same fucking place. Stop wasting everyone’s time with the soft apologetics.
We have a system that moves slowly at a national level by design. One party is hellbent on tearing that down in favor of literal (techno-)fascism. The other wants to maintain the incremental refinement of our democracy. That’s it. One party is literally promising Nazi Germany while the other is offering the potential of the United States of America.
So sure, when someone mentions Alex Preti’s murder or the literal Gestapo or the Epstein Files or unprecedented corruption or the irreparable harm to our international standing or the economic ruin that will take generations to heal or any of the other atrocities just tell them that Anthony Weiner was a creep. You won’t be wrong!
We don’t need incremental refinement now because we are facing an existential threat. The long term promise is a stable democracy. That’s the whole experiment.
We need to hold our noses on the Democrats’ historical performance because the whole party needs to be rebuilt. Instead of fixating on past failures focus on the progressive voices that grow every day.
- Intermediate tasks are cached in a Docker-like manner (content-addressed by filesystem and environment). Tasks in a CI pipeline build on previous ones by applying the filesystem of dependent tasks (AFAIU via overlayfs), so you never execute the same task twice. The most prominent example: if a feature branch is up to date with main, main's CI passes as soon as the branch is merged, because every task on main is a cache hit from the CI run on the feature branch.
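To make "content-addressed by filesystem and environment" concrete, here's a toy sketch of the idea. The names (`cache_key`, `run_task`) and the hashing scheme are illustrative assumptions, not RWX's actual implementation:

```python
import hashlib
import json

# Toy content-addressed task cache: the key is a digest of everything that
# can affect a task's output (input filesystem, environment, command).
def cache_key(filesystem_digest: str, env: dict, command: str) -> str:
    payload = json.dumps(
        {"fs": filesystem_digest, "env": env, "cmd": command},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()

cache: dict[str, str] = {}

def run_task(filesystem_digest, env, command, execute):
    key = cache_key(filesystem_digest, env, command)
    if key in cache:      # identical inputs -> reuse the prior result
        return cache[key]
    result = execute()    # cache miss -> actually run the task
    cache[key] = result
    return result
```

Under this model, merging an up-to-date branch produces the exact same keys on main that the branch's CI run already populated, which is why every task is a hit.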
- Failures: the UI surfaces failures to the top, and because of the caching semantics, you can re-run just the failed tasks without having to re-run their dependencies.
- Debugging: they expose a breakpoint (https://www.rwx.com/docs/rwx/remote-debugging) command that stops execution during a task and lets you shell into the remote container, so you can debug interactively rather than pushing `env` dumps and other debugging commits again and again. And when you do need to push to test a fix, the caching semantics again mean you skip all the setup.
There's a whole lot of other stuff: you can generate the tasks in a CI pipeline from any programming language of your choice, the concurrency control supports multiple modes, and there's no need for `actions/cache` because of the caching semantics and the incremental caching feature (https://www.rwx.com/docs/rwx/tool-caches).
The previous post describes a problem where you do a large docker build, then fan out to many jobs which need to pull this image, and the overhead is enormous. This implies rwx has less overhead. Just saying that there’s content addressable cache doesn’t explain how this particular problem is solved.
If you have a Dockerfile where a small change to your source results in one particular very large layer having to be rebuilt, and you then want to fan out and run many parallel tests using that image, what actually happens when you try to run that new fat layer on a bunch of compute, and how is it better than the implied naive solution? That fat layer exists on a storage system somewhere, and a bunch of compute nodes need to read it. What happens?
There are three main things we do to solve this, all of which relate to the fact that we have our own (OCI-compatible) container runtime under the hood instead of using Docker.
1. We don't gzip layers like Docker does. Gzip is really slow, and it's much slower than the network. Storage is cheap. So it's much faster to transmit uncompressed layers than to transmit compressed layers and decompress them.
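A back-of-envelope model of why skipping gzip can win, with made-up numbers (not RWX benchmarks) and assuming download and decompression don't fully overlap:

```python
# Illustrative model: time to get a layer onto a node, compressed vs. not.
# All numbers are hypothetical; real pipelines may overlap these steps.
def transfer_seconds(size_gb, network_gbps, gunzip_gbps=None, ratio=1.0):
    wire_time = (size_gb * ratio * 8) / network_gbps  # bits on the wire
    if gunzip_gbps is None:
        return wire_time                              # raw layer: done
    decompress_time = (size_gb * 8) / gunzip_gbps     # inflate on arrival
    return wire_time + decompress_time

# A 4 GB layer on a 25 Gbit/s network, gzip halving size but
# decompressing at only ~2 Gbit/s:
uncompressed = transfer_seconds(4, network_gbps=25)                          # 1.28 s
compressed = transfer_seconds(4, network_gbps=25, gunzip_gbps=2, ratio=0.5)  # 0.64 + 16 s
```

When the network is faster than single-threaded decompression, the inflate step dominates, so the raw layer arrives usable long before the compressed one does.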
2. We've heavily tuned our agents for pulling layers fast. Disk throughput and IOPS really matter, so we provision those higher than you typically would for running workloads in the cloud. When pulling layers we adjust kernel parameters like vm.dirty_ratio to values we've empirically found work well for layer pulls. We make sure we completely saturate our network bandwidth when pulling layers. And so on.
3. This third one is experimental and something we're actively working on improving, but we have our own underlying filesystem which lazily loads the files from a layer instead of pulling tons of (potentially unneeded) files up front. This is similar to AWS's [Seekable OCI](https://github.com/awslabs/soci-snapshotter) but tuned for our particular needs.
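A toy sketch of the lazy-loading idea, in the spirit of SOCI rather than anything resembling RWX's actual filesystem (the `LazyLayer` class and its index format are invented for illustration):

```python
# Lazily-loaded layer: file contents are fetched from remote storage only on
# first read, so files in the layer that a job never touches never cross the
# network. A real implementation would hook this into the filesystem layer.
class LazyLayer:
    def __init__(self, index, fetch):
        self.index = index    # path -> (offset, length) within the remote layer
        self.fetch = fetch    # fetch(offset, length) -> bytes, e.g. a ranged GET
        self.cache = {}
        self.fetched = 0      # how many remote reads we actually issued

    def read(self, path):
        if path not in self.cache:
            offset, length = self.index[path]
            self.cache[path] = self.fetch(offset, length)
            self.fetched += 1
        return self.cache[path]
```

The win is the same as SOCI's: a test job that touches a handful of files out of a multi-gigabyte layer only pays for the bytes it reads.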
I've been slowly working on improving our documentation to explain these kinds of differentiators that our architecture and container runtime provide, but most of it is unpublished so far. We definitely need to do a much better job of explaining _how_ we are faster and better rather than just stating it :).
The other side of this is that we also made _building_ those layers much, much faster. We blogged a little bit about it at https://www.rwx.com/blog/we-deleted-our-dockerfiles but just to hit some quick notes: in RWX you can vary the compute by task, and it turns out throwing a big machine at (e.g.) `npm install` is quite effective. Plus we make using an incremental cache very easy, and layers generated from an incremental cache contain only the incremental parts, so they tend to be smaller. And our pipelines are DAGs, so you can parallelize your setup in a way that is very painful to do with Docker, even when using multi-stage builds. And our cache registry is global and very hard to mess up, whereas a lot of people misconfigure their Docker caches and have cache misses all over their Docker builds. And we have miss-then-hit semantics for caching. Okay, I'm rambling now! But happy to go into more depth on any of this!
Sure, but as noted elsewhere, IDEs generally don't "do stuff" by default just on opening a folder. VSCode, by default, will run some programs as soon as you open a folder.
> People buying ads are their real customers, users are there to be exploited.
It's one level further. The global intelligence apparatus is the real customer, and they economically reward those who would build the most-surveillable and/or most-opinion-influencing products and services.
I meant more: what is stopping platforms like Meta from generating a small-ish amount of click fraud, under the guise of the fake-user framework they initially set up for kickstarting engagement, to juice their revenue?
I'm not sure if I would call it sarcasm, but it's a reference to a popular computer science joke format.
The first time I saw it:
>There are 10 kinds of people in the world, those who understand binary and those who don't.
The joke is that 10 is how you express 2 in base 2.
I think there is another layer to the joke, though; often in mathematics, computer science, algorithms, and software engineering, things get divided into sets, sets get broken down into two sets according to whether some property about the elements is true or false, and this joke echoes that.
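For the literal-minded, the punchline checks out in code:

```python
# "10" read in base 2 is two, so the sentence really does name 2 kinds of people.
assert int("10", 2) == 2
assert 0b10 == 2  # Python's binary literal syntax says the same thing
```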