Hacker News | smetj's comments

Generally speaking, the value of these contributions was determined by "proof of work". Time and effort are precious to a human, hence it's a somewhat self-regulating system preventing huge amounts of low-quality contributions from being generated. This is now gone. Isn't that an interesting problem to fix?


Beautiful! Nice work.

Thanks. I really appreciate it.

Be that as it may, its aesthetics are so distinct it isn't for everybody. Also, a big part of the target audience expecting to buy a utility vehicle has cheaper, proven, and more practical alternatives. I guess the fact it's not road legal in the EU doesn't help either, whilst other Tesla models are quite popular there.

It's not GitHub Actions' fault but the horrors people create in it, all under the pretense that automation is simply about wrapping a GitHub Action around something. Learn to create a script in Python or similar and put all logic there, so you can execute it locally and port it to the next CI system when a new CTO arrives.
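That advice can be sketched as a single entry point that behaves identically on a laptop and in any CI system; the file name `ci.py` and the task names below are illustrative, not from the thread:

```python
# ci.py -- hypothetical sketch: all pipeline logic lives in one script,
# so the CI workflow only needs one step like `python ci.py test`.
import argparse

TASKS = {}

def task(fn):
    """Register a function as a runnable CI task."""
    TASKS[fn.__name__] = fn
    return fn

@task
def lint():
    print("running lint")   # placeholder for a real linter invocation
    return 0

@task
def test():
    print("running tests")  # placeholder for a real test-runner invocation
    return 0

def main(argv=None):
    parser = argparse.ArgumentParser(prog="ci")
    parser.add_argument("task", choices=sorted(TASKS))
    args = parser.parse_args(argv)
    return TASKS[args.task]()  # exit code of the chosen task

if __name__ == "__main__":
    raise SystemExit(main())
```

Porting to another CI system then means rewriting one thin YAML wrapper, not the pipeline itself.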

I think in this case they hate the fact that they cannot easily SSH into the failing VM and debug from there. Like "I have to edit my workflow, push it, wait for it to run and fail, and repeat".

Yep, my company moved onto GH Actions a few years ago and this was probably the single biggest pain point. But also the whole system just feels awkward and annoying to work with. It feels like a classic Microsoft product that would never get traction from a standalone company but because it's part of Microsoft/GitHub and is "good enough" lots of people put up with it.

I had the weirdest thing happen a few days ago, and it only seemed to happen in the GH runner for a PR. Somehow a function was duplicated in the action runner, but not locally or anywhere else. No idea how the corruption happened. It literally took me hours of pushing minor changes to try to correct the issue. I finally cat'd the file contents out and yep, the function was duplicated. No idea how. Had to create a new branch, copy the changes over, and then it worked.

Still no idea what happened or how to ever fix/prevent it again.


Obviously, by doing everything inside a container, where you control things. The GH action should only start the container and pass it some env vars.

CircleCI had this exact feature

One can get SSH access with self-hosted runners, but it is problematic because uncovering secrets becomes trivial.

Uncovering secrets is usually trivial. `printenv` in a build script does that pretty reliably.

What do you mean? Simple env prints get masked as *** in logs

I guess one can always just echo the secret to a file and upload-artifact it


Like masking would work against anything but accidents.

  printenv | base64

Unless you've got something sanitizing env prints in your logs, no, they don't get masked. I've leaked secrets accidentally while debugging environment variable issues.

Of course the bigger side of the issue is that anyone who can run code in the CI context can view all data in the CI context. So if an untrusted dev can write a build script that executes with secrets in the environment, they can leak those secrets. CI is RCE-as-a-service, you implicitly give permission for anyone who can edit build scripts to run arbitrary code. Like you say, they can echo the secret to a file & upload it elsewhere, and they can also use it directly from the edited build script.
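The point that masking only stops accidents can be illustrated in a few lines; the secret value here is made up:

```python
import base64

secret = "hunter2"  # made-up example value
encoded = base64.b64encode(secret.encode()).decode()

# A masker that scans logs for the literal secret never sees the encoded form:
masked = encoded.replace(secret, "***")
assert secret not in masked                          # nothing for the masker to catch
assert base64.b64decode(masked).decode() == secret   # yet trivially recoverable
```

Any invertible transform (base64, rot13, reversing the string) defeats literal-match masking, which is why the real boundary is who can run code in the CI context, not what the log filter catches.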


Secrets that you store in github actions secrets?

Yes. I've done it. That's why there is an "approve and run" button when the PR comes from another repo. So that the maintainers can check if the PR is trying to exfiltrate secrets... or do other bad things of course.

The whole point of the github action secrets is that they are accessible by the CI when needed. Which means they are accessible. It's just a convenient way to store the secret in such a way that it is not completely public. But you still have to trust github and your maintainers.


No, it is GitHub's fault. They encourage the horrors because they lead to vendor lock-in. This is the source of most of Microsoft's real profit margins.

This is probably why they invented a whole programming language and then neglected to build any debugging tools for it.


> Learn to create a script in Python or similar and put all logic there so you can execute it locally and can port it to the next CI system when a new CTO arrives.

That's a perfectly good several weeks of un-estimatable busywork I could be getting paid for!


The hard parts are things like deploying artifacts, publishing releases, caching, etc. that are CI-specific and can't be run locally.

This!

All discussions so far seem to be around building artifacts, which is the easy part. Making the full CI/CD pipeline work with GHA is a pain. GHA has no easily usable state mechanism; you have to create it all yourself.


This is giving "Debian systemd units call their old init.d scripts" energy but I kind of like it

systemd units that are small, simple, and call into a single script are usually fantastic. There's no reason for these scripts to be part of another init system; but making as much of your code completely agnostic to the env it runs in sounds good regardless. I think that's the feeling you're feeling.

It's 100% GA's fault and they do it on purpose.

if you can't run it locally, it's shit

It's simple. If you require specific software that only runs on Windows, then you cannot use Linux. If that's not the case, the Linux desktop these days is entirely capable of helping you achieve your goals.

it is happening, just not everywhere at once

Where are the products then? Otherwise it's just marketing

At work, I was involved in a project where a large number of individual tasks defined as declarative code had to be translated into JS-based equivalents. Due to the unpredictability of each task, we would have had to do this pretty much manually, one by one. I would estimate at minimum 2 months of grunt work for 4 entry-level engineers. Thanks to coding agents and LLMs we were able to achieve this in a week. The quality of the end result is top notch.

If that's not a product ... then I don't know what it is.

- What was the state of AI/LLMs 5 years ago compared to now? There was nothing.

- What is the current state of AI/LLMs? I can already achieve the above.

- What will that look like 5 years down the road?

If you haven't experienced first-hand a specific task before and after AI/LLMs, I think it's indeed difficult to get insight into that last question. Keep in mind that progress is probably exponential, not linear.


Task automation != replacing engineers. Automating some focused, specific tasks has been part of our job forever. On the other hand, for 5 years now we've heard that software devs won't be needed anymore; let's see in another 5 years. If you're so sure about your prediction, please advise on some lottery numbers, thanks.

Well ... IMO this is literally replacing (entry-level) engineers, but let's agree to disagree on that. Be that as it may ... task automation is also "a product" then, no? 5 years ago this wasn't possible. Now it is, so extrapolate that to the future ...

ps: If you can guarantee the Powerball lottery continues forever, I can give you a guaranteed winning combination.


You don't see the products because not all AI-assisted dev products are AI wrappers. These products look like regular software, both internal company tools and external customer-facing ones.

There are people all over the place building stuff that would've either never been built, or would've required a paid dev++.

I built a whole webshop with an internal CRM/admin panel to manage ~150 products. I built middleware connecting our webshop to our legacy ERP system, something that would normally be done by another software company.

I built a program with a UI that makes it super easy for us to generate ZPL code and print labels using 4 different label printers automatically with a simple interface, managed by an RPi.

I have built custom personal portfolio websites for friends with Gemini 3 in hours for free, something that again would've cost money for a dev or meant some crappy WP/Squarespace templates.

As the other user said, the progress/changes are not distributed evenly, and are impossible to quantify.

But to me, whose main job is not programming (though I know how to code) but running a non-software business, the productivity gains are very obvious, as is the fact that because of LLMs I have robbed developers of potential work.


The world does not need more shitware. We need medical advances, scientific breakthroughs, and a societal shift to improve the wellbeing of all people. These things are much harder than writing shitty software, and to solve them we will need not the current AGIs (Google Gemini 3 Pro and ChatGPT 5.2 Thinking) but ASI.

Wellbeing of people includes being productive with Windows, maybe for doing medical research, not uninstalling it for Linux because it became a bloated, unstable hell.

The people with money aren’t funding any of those however

This has not been my experience at all ... I've run KDE + Fedora or Ubuntu on laptops for years as my daily driver doing professional work. It's an absolute joy to work with and stable. If there's a hiccup, then it's because some unrelated process is consuming all memory or hogging all CPUs (Slack, Teams, I'm looking at you), which would crash any desktop.


I get regular "crashes" on the newest Fedora KDE on a new Thinkpad X1 (from this year). I say "crashes" because it’s not the window manager or Wayland session crashing but some non-essential component of the Plasma desktop (don’t remember which one right now), so it doesn’t affect my work at all. From my point of view it basically just causes a crash report popup every 1-2 hours and says whatever service crashed has been restarted.


For me it happens when I leave Wireshark running for a bit too long...


Somehow, somewhere there is a pleasant balance between DRY and non-DRY which is different for everybody. God forbid having a colleague who sees a thing repeating and slaps an abstraction over it at whatever cost because DRY!


Optimising for "code is easy to change" hits that balance well enough.

Start with a loose bag of functions. These are easy to compose in to larger pieces of functionality. And, this is key, easy to decompose when things change. Once this WET bag becomes a chore to change, the right abstraction might just show itself for DRY-ing out.
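A toy illustration of that progression (the function names and data are made up): two "WET" functions repeat a pattern but stay trivial to change independently, and only once the repetition becomes a chore does the shared abstraction suggest itself.

```python
# WET stage: the repetition is visible but each function can still
# diverge freely as requirements change.
def format_user(user):
    return f"{user['name']} <{user['email']}>"

def format_vendor(vendor):
    return f"{vendor['name']} <{vendor['email']}>"

# DRY stage: once the pattern has stabilized, the abstraction is obvious
# and cheap, instead of guessed up front at whatever cost.
def format_contact(contact):
    return f"{contact['name']} <{contact['email']}>"
```

The point is the ordering: the abstraction is extracted after the duplication has proven itself stable, not imposed at the first hint of repetition.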


Exactly! Link portals and webrings ...

