Hacker News

It's not GitHub Actions' fault but the horrors people create in it, all under the pretense that automation is simply a matter of wrapping a GitHub Action around something. Learn to write a script in Python or similar and put all the logic there, so you can execute it locally and port it to the next CI system when a new CTO arrives.
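A minimal sketch of that approach (script name, env var, and the echoed steps are all hypothetical): every bit of logic lives in one script with local-friendly defaults, and the CI step just invokes it.

```shell
#!/usr/bin/env sh
# ci/build.sh (hypothetical name): every CI step just runs this script,
# so the same logic works locally and survives a CI migration.
set -eu

# CI-specific values arrive as env vars, with defaults for local runs
BUILD_DIR="${BUILD_DIR:-build}"

mkdir -p "$BUILD_DIR"
echo "linting..."
echo "building into $BUILD_DIR..."
echo "testing..."
echo "done"
```

The workflow file then shrinks to a single `run: ./ci/build.sh` step, and moving to another CI system means porting one line instead of a pile of YAML.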




I think in this case they hate the fact that they cannot easily SSH into the failing VM and debug from there. Like "I have to edit my workflow, push it, wait for it to run and fail, and repeat".

Yep, my company moved onto GH Actions a few years ago and this was probably the single biggest pain point. But also the whole system just feels awkward and annoying to work with. It feels like a classic Microsoft product that would never get traction from a standalone company but because it's part of Microsoft/GitHub and is "good enough" lots of people put up with it.

I had the weirdest thing happen a few days ago... and it only seemed to be happening in the GH runner for a PR... somehow a function was duplicated in the action runner, and not locally or anywhere else... no idea how the corruption happened... it literally took me hours of pushing minor changes to try to correct the issue... I finally cat'd that file's contents out and yep, the function was duplicated... no idea how. Had to create a new branch, copy the changes over, and then it worked.

Still no idea what happened or how to ever fix/prevent it again.


Obviously, by doing everything inside a container, where you control things. The GH action should only start the container, and pass it some env vars.
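A sketch of that split (image name, script path, and env var names are all made up): the workflow's only responsibility is to assemble one `docker run` invocation. Built as a string and echoed here so the sketch is inspectable without Docker installed; in a real workflow you would execute it.

```shell
# Hypothetical: the GH action does nothing but launch this container,
# forwarding a couple of env vars. All real logic is in the image/script.
IMAGE="myorg/ci-image:latest"
CMD="docker run --rm -e API_TOKEN -e BUILD_ENV -v $PWD:/src -w /src $IMAGE ./ci/build.sh"
echo "$CMD"
```

Because the container and script are CI-agnostic, the same command runs on a laptop, and debugging a failing build means `docker run` locally instead of push-and-pray.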

CircleCI had this exact feature

One can get SSH access with self-hosted runners, but it is problematic because uncovering secrets becomes trivial.

Uncovering secrets is usually trivial. `printenv` in a build script does that pretty reliably.

What do you mean? Simple env prints get masked as *** in logs

I guess one can always just echo the secret to a file and upload-artifact it


Like masking would work against anything but accidents.

  printenv | base64
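A quick demonstration with a made-up value: log masking works by matching the literal secret string in the output, so any reversible transformation slips past it.

```shell
SECRET="hunter2"   # made-up value standing in for a real secret

# A masker looks for the literal string "hunter2" in log output;
# neither of these lines contains it, so neither would be masked:
echo "$SECRET" | base64   # aHVudGVyMgo=
echo "$SECRET" | rev      # 2retnuh
```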

Unless you've got something sanitizing env prints in your logs, no, they don't get masked. I've leaked secrets accidentally while debugging environment variable issues.

Of course the bigger side of the issue is that anyone who can run code in the CI context can view all data in the CI context. So if an untrusted dev can write a build script that executes with secrets in the environment, they can leak those secrets. CI is RCE-as-a-service, you implicitly give permission for anyone who can edit build scripts to run arbitrary code. Like you say, they can echo the secret to a file & upload it elsewhere, and they can also use it directly from the edited build script.
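The path described above, sketched with a fake value (no real secret involved, and the variable name is hypothetical): anything present in the step's environment can be written to a file and shipped out.

```shell
# Fake value standing in for a secret the runner injected
export DEPLOY_TOKEN="not-a-real-secret"

# Any build step can dump its environment to a file...
printenv | grep -i token > leaked.txt

# ...and a later upload-artifact step (or a plain curl) carries it out.
cat leaked.txt
```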


Secrets that you store in github actions secrets?

Yes. I've done it. That's why there is an "approve and run" button when the PR comes from another repo. So that the maintainers can check if the PR is trying to exfiltrate secrets... or do other bad things of course.

The whole point of GitHub Actions secrets is that they are accessible to the CI when needed. Which means they are accessible, period. It's just a convenient way to store a secret so that it is not completely public. But you still have to trust GitHub and your maintainers.


No, it is GitHub's fault. They encourage the horrors because they lead to vendor lock-in. This is the source of most of Microsoft's real profit margins.

This is probably why they invented a whole programming language and then neglected to build any debugging tools for it.


> Learn to create a script in Python or similar and put all logic there so you can execute it locally and can port it to the next CI system when a new CTO arrives.

That's a perfectly good several weeks of un-estimatable busywork I could be getting paid for!


The hard parts are things like deploying artifacts, publishing releases, caching, etc. that are CI-specific and can't be run locally.

This!

All the discussion so far is around building artifacts, which is the easy part. Making the full CI/CD pipeline work with GHA is a pain: GHA has no easily usable state mechanism, so you have to build it all yourself.
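One common DIY workaround (file name and keys here are hypothetical): serialize whatever the pipeline needs to remember into a small file, then persist it across runs with something like actions/cache or an artifact.

```shell
# .ci-state is a made-up file name; the keys are whatever your pipeline
# must remember between runs (last deployed SHA, version counter, ...).
STATE_FILE=".ci-state"

# Write state at the end of a run (GITHUB_SHA is set on real runners)
printf 'last_deployed_sha=%s\n' "${GITHUB_SHA:-unknown}" > "$STATE_FILE"

# Read it back at the start of the next run, after a cache restore.
# The file is plain key=value shell syntax, so it can be sourced:
. "$STATE_FILE"
echo "previously deployed: $last_deployed_sha"
```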


This is giving "Debian systemd units call their old init.d scripts" energy but I kind of like it

systemd units that are small, simple, and call into a single script are usually fantastic. There's no reason for these scripts to be part of another init system; but making as much of your code as possible agnostic to the environment it runs in sounds good regardless. I think that's the feeling you're feeling.
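That pattern, sketched as a hypothetical unit (service name and script path are made up): the unit file carries only the systemd-specific bits, and all logic stays in the script, which you can also just run by hand.

```ini
# /etc/systemd/system/myapp.service (hypothetical)
[Unit]
Description=myapp - all logic lives in the script

[Service]
Type=simple
ExecStart=/usr/local/bin/myapp-run.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```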

It's 100% GA's fault and they do it on purpose.

if you can't run it locally, it's shit


