Yeah, do crons even work consistently for GitHub Actions? I tried to set one up the other day and it just randomly skipped runs. There were some docs that suggested they’re entirely unreliable as well.
Hooo boy where do I begin? Dependency deadlocks are the big one - you try to share resource attributes (eg ARN) from one stack to another. You remove the consumer and go to deploy again. The producer sees no more dependency so it prunes the export. But it can't delete the export, cause the consumer still needs it. You can't deploy the consumer, because the producer has to deploy first sequentially. And if you can't delete the consumer (eg your company mandates a CI pipeline deploy for everything) you gotta go bug Ops on slack, wait for someone who has the right perms to delete it, then redeploy.
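Roughly what that trap looks like in code (a minimal sketch, assuming CDK v2 / aws-cdk-lib; the stack and construct names are made up):

    import { App, Stack, CfnOutput } from 'aws-cdk-lib';
    import * as sns from 'aws-cdk-lib/aws-sns';

    const app = new App();

    // Producer stack owns the topic.
    const producer = new Stack(app, 'ProducerStack');
    const topic = new sns.Topic(producer, 'EventsTopic');

    // Consumer stack references the topic ARN. CDK quietly turns this into a
    // CloudFormation Export on ProducerStack plus an Fn::ImportValue here.
    const consumer = new Stack(app, 'ConsumerStack');
    new CfnOutput(consumer, 'UpstreamTopicArn', { value: topic.topicArn });

    // Remove ConsumerStack from the app and deploy again: ProducerStack wants
    // to prune the now-unused export, but CloudFormation refuses while the
    // still-live ConsumerStack in the account imports it - the deadlock above.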
You can't actually read real values from Parameters/exports (you get a token placeholder), so you can't store JSON and then read it back and decode it (unless you're in the same stack, which is almost pointless). You can do some hacks with Fn:: though.
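The token problem in miniature (again a sketch, assuming aws-cdk-lib; the export name and delimiter are made up):

    import { Fn } from 'aws-cdk-lib';

    // At synth time this is a placeholder token, not the real exported string,
    // so JSON.parse(raw) cannot work - the value only exists at deploy time.
    const raw = Fn.importValue('shared-config-json');

    // The escape hatch is CloudFormation intrinsics, which resolve at deploy
    // time: export a delimited string instead of JSON and pick fields out.
    const first = Fn.select(0, Fn.split('|', raw));
    const second = Fn.select(1, Fn.split('|', raw));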
Deploying certain resources that have names specified (vs generated) often breaks because CloudFormation has to create the new resource before destroying the old one, which it can't, because the name conflicts (it's the same name... cause it's the same construct).
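For anyone who hasn't hit it, the naming failure mode looks roughly like this (sketch, assuming aws-cdk-lib; the bucket names are made up):

    import { App, RemovalPolicy, Stack } from 'aws-cdk-lib';
    import * as s3 from 'aws-cdk-lib/aws-s3';

    const app = new App();
    const stack = new Stack(app, 'NamingDemoStack');

    // Explicit physical name: any change that forces replacement makes
    // CloudFormation create the new bucket before deleting the old one, and
    // that create fails because 'my-team-artifacts' already exists.
    new s3.Bucket(stack, 'Artifacts', { bucketName: 'my-team-artifacts' });

    // Generated name: replacement works, because the new resource gets a
    // fresh physical name and the old one is cleaned up afterwards.
    new s3.Bucket(stack, 'ArtifactsAutoNamed', {
      removalPolicy: RemovalPolicy.DESTROY,
    });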
It's wildly powerful though, which is great. But we have basically had to create our own internal library to solve what should be non-problems in an IaC system.
Would be hilarious if my coworker stumbled upon this. I know he reads hn and this has been my absolute crusade this quarter.
> The producer sees no more dependency so it prunes the export. But it can't delete the export, cause the consumer still needs it. You can't deploy the consumer, because the producer has to deploy first sequentially. And if you can't delete the consumer (eg your company mandates a CI pipeline deploy for everything) you gotta go bug Ops on slack, wait for someone who has the right perms to delete it, then redeploy.
This is a tricky issue. Here is how we fixed it:
Assume you have a stack with the construct ID `foo-bar`, which exports resources that are consumed by `charlie`.
Update the stack construct ID to a new value, i.e. `foo-bar-2`. Then, at the very end of your CI, add a `cdk destroy foo-bar` to delete the original stack. The rename forces a deployment of a brand-new stack with new references; `charlie` then updates against the new stack, and once `charlie` has successfully updated, the original `foo-bar` stack can be safely destroyed.
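In code terms, roughly (a sketch, assuming aws-cdk-lib; the topic is just a stand-in for whatever the stack exports):

    import { App, Stack } from 'aws-cdk-lib';
    import * as sns from 'aws-cdk-lib/aws-sns';

    const app = new App();

    // Was: new Stack(app, 'foo-bar'). Bumping the construct ID makes the next
    // deploy create a brand-new stack with fresh exports, leaving the old
    // 'foo-bar' stack (and the export charlie still imports) untouched.
    const fooBar2 = new Stack(app, 'foo-bar-2');
    new sns.Topic(fooBar2, 'EventsTopic');

    // Once charlie has re-deployed against the new stack's exports, the last
    // CI step removes the stranded original:
    //
    //   cdk destroy foo-bar --force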
The real conundrum is with data - you typically want any stateful resources (Dynamo, RDS, etc.) in their own stacks at the very root of your dependency tree. That way any revised stacks can be cleanly destroyed and recreated without impacting your data.
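A sketch of that layering (assuming aws-cdk-lib; the table and stack names are made up):

    import { App, RemovalPolicy, Stack } from 'aws-cdk-lib';
    import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';

    const app = new App();

    // Data lives in its own stack at the root of the dependency tree, and the
    // table survives even if the stack itself is ever torn down.
    const data = new Stack(app, 'DataStack');
    const table = new dynamodb.Table(data, 'Orders', {
      partitionKey: { name: 'pk', type: dynamodb.AttributeType.STRING },
      removalPolicy: RemovalPolicy.RETAIN,
    });

    // Application stacks only consume the table, so they can be destroyed and
    // recreated (e.g. via the rename trick above) without touching the data.
    const service = new Stack(app, 'ServiceStack');
    // ...resources in ServiceStack reference table.tableName / table.tableArn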
> Dependency deadlocks are the big one - you try to share resource attributes (eg ARN) from one stack to another. You remove the consumer and go to deploy again. The producer sees no more dependency so it prunes the export.
I’m a little puzzled. How are you getting dependency deadlocks if you’re not creating circular dependencies?
Also, exports in CloudFormation are explicit. I don’t see how this automatic pruning would occur.
> Deploying certain resources that have names specified (vs generated) often breaks
CDK tries to prevent this antipattern from happening by default. You have to explicitly make it name something. The best practice is to use tags to name things, not resource names.
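Something along these lines (a sketch, assuming aws-cdk-lib; the tag keys and values are made up):

    import { App, Stack, Tags } from 'aws-cdk-lib';
    import * as s3 from 'aws-cdk-lib/aws-s3';

    const app = new App();
    const stack = new Stack(app, 'TaggingDemoStack');

    // No bucketName: CloudFormation generates a unique physical name, so
    // replacements never collide.
    const bucket = new s3.Bucket(stack, 'Artifacts');

    // Identify resources through tags instead of physical names.
    Tags.of(bucket).add('team', 'payments');
    Tags.of(stack).add('service', 'artifact-store');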
> I’m a little puzzled. How are you getting dependency deadlocks if you’re not creating circular dependencies?
> Also, exports in CloudFormation are explicit. I don’t see how this automatic pruning would occur.
I explained that. It's a quirk of how it tree-shakes: if nothing dereferences the attribute, it deletes the export. And yes, it'll automatically create an export if you do something like this:
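(Rough sketch, assuming aws-cdk-lib; the stack, queue, and parameter names are made up.)

    import { App, Stack } from 'aws-cdk-lib';
    import * as sqs from 'aws-cdk-lib/aws-sqs';
    import * as ssm from 'aws-cdk-lib/aws-ssm';

    const app = new App();

    const producer = new Stack(app, 'Producer');
    const queue = new sqs.Queue(producer, 'Jobs');

    // Just dereferencing queue.queueArn from another stack is enough: CDK
    // adds an Export to Producer and an Fn::ImportValue here, with no
    // CfnOutput or explicit export written by hand.
    const consumer = new Stack(app, 'Consumer');
    new ssm.StringParameter(consumer, 'JobsQueueArn', {
      stringValue: queue.queueArn,
    });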
> CDK tries to prevent this antipattern from happening by default. You have to explicitly make it name something. The best practice is to use tags to name things, not resource names.
I'm well aware, but I'm fighting a ton of institutional inertia at my work.
I'll just echo the other poster with "deadlocks". It's obscene how slow CF is, and its failure modes often leave you in a state that feels extremely dangerous. I've had to contact AWS Support before because CF locked itself up irrecoverably due to cycles.
I've got a Fairphone 6 with e/OS. Works well. OK, some caveats:
- MobilePay does not work (I think Danes have an issue with non-mainstream platforms for whatever reason)
- the default browser does not work for some of the authentication flows when it integrates as the in-app browser. But it does give me a dark theme on HN, so I just keep it on my home screen while Firefox is the default for compat reasons.
I think that was much more a cooperative agreement type situation than childish threats like we have now.
I'm not opposed to changes in territory in principle... but there are no principles involved in the current US administration acting out like a fragile child.
Threats are always a part of negotiations. There was also a proposal to trade Greenland for 1/3 of the Philippines (which the US got from Spain just for showing up to a war that nobody wanted).
It's always kind of driven me nuts that the "Hide" and "Flag" buttons are right next to each other. The number of times I've fat-fingered Flag by mistake and had to go to my Flagged Submissions and remove it...
Second the Nix approach. One can even build a GitHub Actions-compatible container out of a flake and have Actions run in it. I have done so for my personal projects: https://github.com/anttiharju/compare-changes
Unless you've got something sanitizing env prints in your logs, no, they don't get masked. I've leaked secrets accidentally while debugging environment variable issues.
Of course the bigger side of the issue is that anyone who can run code in the CI context can view all data in the CI context. So if an untrusted dev can write a build script that executes with secrets in the environment, they can leak those secrets. CI is RCE-as-a-service: you implicitly give anyone who can edit build scripts permission to run arbitrary code. Like you say, they can echo the secret to a file and upload it elsewhere, and they can also use it directly from the edited build script.
Yes. I've done it. That's why there is an "approve and run" button when the PR comes from another repo. So that the maintainers can check if the PR is trying to exfiltrate secrets... or do other bad things of course.
The whole point of GitHub Actions secrets is that they are accessible by the CI when needed. Which means they are accessible. It's just a convenient way to store the secret so that it is not completely public. But you still have to trust GitHub and your maintainers.
In the past week I have seen:
- actions/checkout inexplicably failing, sometimes succeeding on 3rd retry (of the built-in retry logic)
- release CI jobs scheduling _twice_, causing failures, because ofc the release already exists
- jobs just not scheduling. Sometimes for 40m.
I have been using it actively for a few years, and putting aside everything the author is saying, just the base reliability is going downhill.
I guess Zig was right. Too bad they missed Buildkite; Codeberg hasn't been that reliable or fast in my experience.