Because even though at work it looks like you’re tasked with creating use values, you’re only there as long as the use values you create can be exchanged in the market for a profit. So every humane drive to genuinely improve your work will clash with the external conditions of your existence within that setting. You’re not there to serve people, create beautiful things, solve problems, nu-uh. You’re there to keep capital flowing. It’s soulless.
To think that “non-profit” work is actually non-profit work is just to not have grasped the nature of labor. You have to ask yourself: am I producing use values for the satisfaction of human needs, or am I working to make sure the extraction of value from the production of use values keeps happening?
In some very extreme cases, such as the Red Cross or reformist organizations, your job looks very clear, direct, and “soulful”. You’re directly helping desperate people. But why did people end up in that situation? And what is the downstream effect of your helping them? It’s profit. It’s always profit. You’re salvaging humanity for parts to be bought and sold again. That doesn’t make it dishonest work. It’s just equally soulless.
Your argument appears to be that if you redefine all of humanity as mere grist for a capitalist machine, you can then redefine any altruistic act as a measure to extract more profit.
I don’t define anything. The truth is just that there’s no profit extraction without charity work. I’ve done lots of it. If you’ve done it, you know too.
As dark as it may seem to strip the romanticism out of what you call humanity, not only is there no just salary for those who bear the weight of the machine, there isn’t even a salary per se.
If, to you, humanity means doing seemingly nice-guy work without question, then call me a monster.
My adversary has accused me of sophistry, as if I were a mere crafter of kaleidoscopes. I’m simply returning the compliment by calling out their romanticism.
Charity work can bring momentary fulfillment to a person. I’m not reducing humanity by situating it within the machine. You even have the right to reject the material proposition that charity work is a piece of the totality of the machine. But eventually all truth becomes self-evident, so let’s leave it to the reader.
The point is that he may not be using AI in any shape or form. Regardless, AI scrapes his work without explicit consent and then spits it back in a "polished", soul-free form.
> I was recently trying to analyse a 600mb exe (denuvo/similar). I wasted a week after ghidra crashed 30h+ in multiple times.
During the startup auto analysis? For large binaries it makes sense to dial back the number of analysis passes and only trigger them if you really need them, manually, one by one. You also get to save in between different passes.
Yup. It was actually an openjdk crash, which was extra interesting.
I figured I probably could remove some passes, but as a light user I don't really know (and didn't want to spend the time learning) how important each one is and how long they take. IDA's defaults were just better.
Agreed. IDA is surely the “primary” tool for anything that runs on an OS on a common arch. But once you get into embedded, Ghidra is heavily used for serious work, and once you get into heavily automation-based scenarios or obscure microarchitectures, it’s the best solution and certainly a “serious” product used by “real” REs.
IDA is the better tool if you're being paid to work with architectures that IDA supports well (ARM(64), x86(_64), etc.). This usually means 'mainstream' security/malware research. It's not worth the price for hobbyists. Before Hex-Rays was sold to private equity, it could make sense for a rich hobbyist to buy a private license once and use it for a few years without software updates; with the current cloud offering, it pretty much makes no sense.
Ghidra is the better tool if you're dealing with exotic architectures, even ones that you need to implement support for yourself. That's because any architecture that you have a full SLEIGH definition for will get decompilation output for free. It might not be the best decompiler out there, sure, but for some architectures it's the only decompiler available.
Both are generally shit UX wise and take time to learn. I've mostly switched from IDA to Ghidra a while back which felt like pulling teeth. Now when I sometimes go back to IDA it feels like pulling teeth.
It's also not about lack of support, but the fact that you have to pay extra for every single decompiler. This sucks if you're analyzing a wide variety of targets because of the kind of work you do.
IDA also struggles with disasm for Harvard architectures which tend to make up a bulk of what I analyze - it's all faked around synthetic relocations. Ghidra has native support for multiple address spaces.
I really want to like Binary Ninja, but whenever I have the choice between not paying (Ghidra), paying for something that I know works (IDA) and paying for something that I don't know if it works (Binja) then the last option has always lost so far.
Maybe we need to get some good cracked^Wcommunity releases of Binja so that we can all test it as thoroughly as IDA. The limited free version doesn't cut it unfortunately - if I can't test it on what I actually want to use it for, it's not a good test.
(also it doesn't have collaborative analysis in anything but the 'call us' enterprise plan)
How so? Why would a couple of months break in employment (worst case, if I truly become unemployable for some reason until I learn the tools) harm or destroy my career?
Yeah. 100% this. Remember, Ceph's storage nodes are called OSD, as in Object Storage Daemon.
The biggest reasons to not use Ceph are:
- You plan on using <=30 disks on <=3 computers. Some of the petabyte scale stuff is just in the way at that scale. Even more so if you're looking at a single computer.
- You don't have at least fractional ops staff that will look at a dashboard regularly. In this world, you're better off using a cloud service.
- You're 100% satisfied with the S3 interface and will never want anything else than its write-once objects. Ceph's writable object support won't gain you anything in that world, and genuinely makes the distributed systems problems involved much harder. Ceph was architected as a distributed filesystem and excels as a networked block store for virtual machines. Ceph can do S3-compatibility, but another implementation can cut off a big chunk of the functionality and provide just S3-compat, and simpler can be better.
I doubt many software projects mentioned in this conversation have gone through the extensive stress testing with glitching nodes while trying to maintain performance that Ceph has; simple systems can be too simple. Ceph is quite well battle-hardened by now. Ceph's dedicated QA hardware pool is likely bigger than many competing projects have tried as a cluster size!
You're holding it wrong. I just spent 14 hours (high on coke) working with Claude to generate an agent orchestration framework that has already increased my output to 20x over just using Copilot. Adapt or you'll be left behind and forever part of the permanent underclass.
CDDOL is undoubtedly the future, it is just sad seeing all these negative comments. It's like those people don't even know they've been made redundant already.
It's not too late to jump on the Cocaine-Driven Development Orchestrated by LLMs train.
Amen! I'm pissing blood faster than I can increase my credit card limit for token use, but we'll make it. The 200x (10x from LLM + 20x from orchestration) means that by the end of 2026 we'll all be building $1MM ARR side projects daily.
I built a windmill with Claude. I created a skills.md and followed everything by the book. But now, I have to supply power to keep the windmill running. What am I doing wrong?
Either way, OP is holding it wrong and vague hypebro comments like yours don't help either. Be specific.
Here's an example: I told Claude 4.5 Opus to go through our DB migration files and the ORM model definitions and point out any DB indexes we might be missing based on how the data is being accessed. It did so, ingested all the controllers as well and a short while later presented me with a list of missing indexes, ordered by importance and listing why each index would speed up reads and how to test the gains.
Now, I have no way of knowing how exhaustive the analysis was, but the suggestions it gave were helpful, Claude did not recommend over-indexing, and considered read vs write performance.
The equivalent work would have taken me a day, Claude gave me something helpful in a matter of minutes.
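For anyone wondering what "how to test the gains" can look like in practice, here's a minimal sketch using SQLite as a stand-in (the table, column, and index names are made up for illustration): check the query plan before and after creating the suggested index.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")

def plan(sql):
    # EXPLAIN QUERY PLAN reports whether SQLite scans the whole table
    # or searches via an index; the detail text is in the last column.
    return " ".join(row[-1] for row in cur.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM orders WHERE customer_id = 7"

before = plan(query)  # full table scan, e.g. "SCAN orders"

cur.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")

after = plan(query)   # now something like "SEARCH orders USING INDEX idx_orders_customer"

print(before)
print(after)
```

On a real Postgres/MySQL setup the equivalent is `EXPLAIN ANALYZE` on the hot queries before and after creating the index, ideally against a copy of production-sized data, since the planner's choices depend on table statistics.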
Now, I for one could not handle the information stream of 20 such analyses coming in. I can't even handle 2 large feature PRs in parallel. This is where I ask for more specifics.
Why go through all migration files if you're looking for missing indices in the present? That doesn't seem to make sense when you could just look at the schema as it stands? Either way, why would this take you a day? How many tables do you have?
This one is a lot harder to tell, because there are some AI bros who claim similar things but are completely serious. Just look at Show HN now: there used to be ~20-40 posts per day, but now there are 20 per HOUR.
(Please oh please can we have a Show HN AI. I'm not interested in people's weekend vibe-coded app to replace X popular tool. I want to check out cool projects where people invested their passion and time.)
I'm being completely serious, but what is the current fav open source forum software these days? I'd love to host a forum for a small community I'm involved in. Not a stranger to hosting other things across a variety of stacks, so I'm not particular about technology used.
Seems to work okay in general. I'm not a big fan of the gamified notification system it seems to have - whenever I sign up for an instance, it'll send me things like "Super reader achievement unlocked! You read 10 threads." or whatever. I suppose it can be turned off since it's OSS.
I work at Discourse. As a regular user, if you want to prevent these new user badges (and notifications), head to /u/yourusername/preferences/interface and check "Skip new user onboarding tips and badges".
It's in our plans to eventually rework how this new-user education and notification system works, and I suppose that with https://id.discourse.com/ the intent is that your preferences would follow you to every Discourse site you sign up for, so you could just set them once.
As an admin, badges can be disabled entirely, or individually.
It's their way of attempting to fight user churn. Forums need all the help they can get in that regard given the attention economy of today and the giants they're attempting to fight against. Anything novel is a win.
I responded to the other comment, but I work at Discourse. As a site admin you can disable badges (which is our gamification system) entirely, or you can get rid of individual badges.
I would recommend fighting the 'oh, it needs to FEEL premium' feeling. It doesn't; you're just spoiled by companies who know exactly how to sell you crap specially designed to feel nice. You're being played like a fiddle by marketing departments.