Hacker News | sally_glance's comments

Great idea, except that I don't think it's easy to make sure we don't grant too much power. Basically this idea is the core of representative democracy. Problem is, the people who have been granted a lot of power are very good at finding loopholes to avoid or remove the safeguards we put in place...

There is a trade-off here for sure... I don't agree so much that the goal is to limit power though, but to ensure any power given to leaders is conditional.

I think ideally you want a CEO-type leader of a country who has a lot of executive power, but that leader has a board that provides oversight, and ultimately the public are all shareholders who collectively hold the company and its leaders to account.

I'd argue that, generally speaking, we want to grant more power to our leaders than we do today, but make them much easier to remove and have a well-designed constitution so certain things are legally impossible, in the same way a CEO can't just decide they now have 100% voting rights and no longer need to listen to shareholders.

The solution to a bad CEO isn't to have 10 CEOs. The solution is for the shareholders to boot them for a better CEO.


Yeah, for companies this works because there are external (government) entities providing and enforcing a framework. For countries, there is nothing like that. The traditional solution is separation of powers inside the country I guess, but this requires limiting individual power. Also it's quite complicated in practice and requires a complex legal framework, which is sadly often vulnerable to "workarounds" again.

I'm pretty sure at least the better woodworking shop managers and QA people all have experience with woodworking and probably would also consider this their craft if asked.

Cool, I didn't know about this phenomenon. Reading up a little, it seems like training on multiple languages forces the model to optimize its internal "conceptual layer" weights better instead of relying solely on English linguistics. Papers also mention issues arising from overdoing it, so my guess is even credentialed AI researchers are currently limited to empirical methods here.

In the past these trends were cyclical though. We're coming from an expansion phase (mainly driven by the COVID IT and AI craze) and now going through stagnation towards recession (global manufacturing crisis pulling our service sector down with it). This mirrors the hiring trends (or demand for workers). I'm not sure why you wouldn't expect the pendulum to swing back at some point.

I have been in this industry for a long time, since 1996.

The 2000 dot-com bust wasn't because all of the ideas were bad; most weren't. They were just too early, before high-speed internet was ubiquitous at home, let alone in everyone's pocket.

Incidentally, back then I was a regular old Windows enterprise developer in Atlanta and there were plenty of jobs available at boring companies.

2008 was a general shit show for everyone. But in tech, what we now know as the Big Tech companies were hiring like crazy and growing like crazy. Just based on the law of large numbers, they aren't going to grow over the next decade like they grew over the last one.

They have proven that they can keep going and keep dominating with fewer people. AI has already started automating the jobs of mid-level ticket takers, and it's only going to get worse. Just like factory jobs, those jobs aren't coming back.


Maybe because you remember that statistic about single-person households from a couple of days ago? I think Sweden was in 3rd place globally. Though it's also a very happy country statistically, so maybe something about the geography and upbringing makes that work better than in countries with comparable numbers but worse outcomes (e.g. Germany or the Netherlands)...

I also thought this was how skills work, but in practice I ran into similar issues. The agents I'm using (Gemini CLI, Opencode, Claude) all seem to have trouble activating skills on their own unless explicitly prompted. This will probably be fixed over the next couple of generations, but right now dumping the documentation index right into the agent prompt or AGENTS.md works much better for me. Maybe it's similar to structured output or tool calls, which also only started working well after providers specifically trained their models for them.
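To give an idea, by "dumping the documentation index" I just mean a plain section like this near the top of AGENTS.md (the file names and paths here are made up, only to show the shape):

    # Documentation index
    Read the matching doc before working in that area:
    - docs/skills/db-migrations.md: schema changes and migration commands
    - docs/skills/release-process.md: how releases are cut and tagged
    - docs/skills/testing.md: test layout, unit vs. e2e suites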

Are you not worried that the sibling comment is right and the solution to this will be "more AI" in the future? So instead of hiring a team of human experts to clean up, management might just dump more money into some specialized AI refactoring platform or hire a single AI coordinator... Or maybe they skip the cleanup and just rebuild with AI, because AI is good at greenfield. Then they only need a specialized migration AI to automate the regular switchovers.

I used to be unconcerned, but I admit to being a little frightened of the future now.


Well, in general worrying about the future is not useful. Regardless of what you think, it is always uncertain. I specifically stay away from taking part in such speculative threads here on HN.

What's interesting to me though is that very similar promises were being made about AI in the 80s. Then came the "AI Winter", after the hype cycle and its promises got very far from reality. Generative AI is the current cycle, and who knows, maybe it can fulfill all the promises and hype. Or maybe not.

There's a lot of irrationality currently and until that settles down, it is difficult to see what is real and useful and what is smoke and mirrors.


I'm aware of that particular chapter of history, my master's thesis was on conversational interfaces. I don't think the potential of the algorithms (and hardware) back then was in any way comparable to what's currently going on. There is definitely a hype cycle going on right now, but I'm nearly convinced it will actually leave many things changed even after it plays out.

Funny thing is that meanwhile (today) I've actually been on an emergency consulting project where a PO/PM kind of guy vibecoded some app that made it into production. The thing works, but a cursory audit laid bare the expected flaws (like logic duplication, dead code, missing branches). So that's another point for our profession still being required in the near future.


I mean, you can't review or cross reference something that isn't there... So interpreting in good faith, technically, maybe they just forgot to also check for completeness? /s


More like using a calculator but not being able to explain how to do the calculation by hand. A probabilistic calculator which is sometimes wrong at that. The "lots of theory but no practice" has always been true for a majority of graduates in my experience.


Surely, new grads are light on experience (particularly relevant experience), but they should have student projects and whatnot that they can explain, particularly for coding. Hardware projects are rarer simply because they cost money for parts and schools have limited budgets, but software has far fewer demands.


But you've got to admit that OP's tool does it quicker, unless you like to memorize flags or already have a script specifically for this. And it's much easier on the eyes than most (all?) nmap GUIs out of the box.
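For reference, the kind of one-liner/script I have in mind is just plain nmap (flags from memory, so double-check against your version; subnet is an example):

    # ping sweep of the local subnet, no port scan
    nmap -sn 192.168.1.0/24

    # service/version detection on the first 1024 ports, only showing open ones
    nmap -sV -T4 --open -p 1-1024 192.168.1.0/24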


It does it quicker if you already have this tool installed. nmap is everywhere.


The goal has never been to create something that can replace nmap, the goal was to learn more about networking and about building TUIs in Go. Honestly I am quite overwhelmed by the amount of traction it got today, definitely not what I expected.

I am very grateful for all the feedback and suggestions, and I will take my time to evaluate every comment. In the coming weeks I will try to implement most of the feedback and do releases to improve the tool further, thanks everyone!

