Lyngbakr's comments

What's the product/use case?

We are a web agency, so client projects where the client is open to us using a new language, plus our own products, like:

https://news.ycombinator.com/item?id=46530011


I have a PhD in a field similar to Earth Science, and now I'm an engineering team lead at a company in a field related to my PhD. This is the second such role I've held, and in both cases it was very much a matter of finding the perfect position. I think a key part of being able to combine my domain and technical expertise in a single role was that both jobs were at startups, where there's often a need for folks to wear multiple hats. That said, in the decade since leaving academia I have seen perhaps a handful of such jobs. So they do exist, but they are few and far between, IME.


I was nodding along enthusiastically right up until LLMs, and at that point we sharply diverge.

For me, part of creating "perfect" software is that I am very much the one crafting the software. I'm learning while creating, but I find such learning is greatly diminished when I outsource building to AI. It's certainly harder and perhaps my software is worse, but for me the sense of achievement is also much greater.


That’s really not the point of the post!

The author is saying that “perfect software” is like a perfect cup of coffee. It’s highly subjective to the end user. The perfect software for me perfectly matches how I want to interact with software. It has options just for me. It’s fine tuned to my taste and my workflows, showing me information I want to see. You might never find a tool that’s perfect for you because someone else wrote it for their own taste.

LLMs come in because they wildly increase the amount of stuff you can play around with on a personal level. It means someone finally has time to put together the perfect workflow and advanced tools. I personally have about 0 time outside of work that I can invest in that, so I totally buy the idea that LLMs can really give people the space to develop personal tools and workflows that work perfectly for them. The barrier to entry and experimentation is incredibly low, and since it's just for you, you don't need to worry about scale and operations and all the hard stuff.

There is still plenty of room for someone to do it by hand, but I certainly don’t have time to do that. So I’ll never find perfect software for some of my workflows unless I get an assist from LLMs.

I agree with you about learning and achievement and fun — but that’s completely unrelated to the topic!


Thanks for this. This is exactly the spirit in which I wrote it.

You hit on the key constraint: time. The point isn't that the use of LLMs specifically provides agency, but that it lowers the barrier, allowing us to build things that bring it. "Perfect software" is perfect not just because of what it does, but because of what it lacks (fluff, tracking, features we don't need).


I find that most of the time programming is just procrastination, and having the LLM there breaks through that procrastination and lets me focus on the idea I was thinking about without going into the weeds.

A lot of the time, the LLM outputs the code, I test my idea, and realize I really don't care or the idea wasn't that great, and now I can move on to something else.


I hope at some point people don't feel the need to justify using or not using LLMs. If you feel like using them, use them. If you regret doing that, delete the code and write it yourself. And vice versa - if you are in a slog and an LLM can get you out, just use it.


You have to break the problem down into manageable chunks anyway, so you might as well feed those into a code agent while you write it yourself. If you don't like what it did, explain what it got wrong. It shouldn't take long to figure out which parts are the biggest waste of time to write yourself. You do still have to hop between tools and adjust your confidence as they improve.


I love this answer

I really do


What!

Why is this, I dunno a better way to say it, good?

So ok you don't get into the weeds and you're proud of that, but also nothing you can think of wanting to do turns out to be worth doing.

Those things are wholly related. Opportunity never comes at exactly the time or in the way you expect. You have to be open to it; you have to be seeking out new experiences and new ideas. You have to get into the weeds and try things without being entirely sure what the outcome might be, what insight you might gain, or when that insight might become useful.


A friend of mine has a hilarious method for breaking through procrastination. His one trick is to spend money on the task/project: buy all kinds of things to make the job easier. The purchases have to be useful, but it is more about paying the unlock fee.

GitHub is full of half-forgotten saved games waiting for money to be thrown at them.


I'm now using an LLM to write a voice note organisation application that I have been dreaming about for two decades.

I did vibe code the first version. It runs, but it is utterly unmaintainable. I'm now rewriting it using the LLM as if it were a junior or outsourced programmer (not a developer, that remains my job) and I go over every line of application code. I love it, I'm pushing out decent quality code and very focused git commits. I write every commit message myself, no LLM there. But I don't even bother checking the LLM's unit and integration tests.

I would have never gotten to this stage of my dream project without AI tooling.


> I would have never gotten to this stage of my dream project without AI tooling.

Why not? People have been writing successful personal projects without LLMs for years.


Not the grandparent, but I'm in the same boat. I've been dreaming for almost 10 years of building a sort of digital bullet journal. I made some feeble attempts to start, but never got to the point where I could actually use it. Last year I started again, heavily LLM-assisted. After 1-2 weeks (this was before agents), I had something usable that I could benefit from, which made me want to improve it more, which made me want to use it more.

By now it's grown to 100k lines of code. I've not read all of them, but I do have a high-level overview of the app, and I've done several refactorings to keep it maintainable.

This would not have happened without AI agents. I don't have the time, period. With AI agents, I can kick off a task while I'm going to the park with my kids. Instead of scrolling HN, I glance every now and then at what the agent is doing.


> By now it's grown to 100k lines of code

Did you add an extra zero there? A journal with 100k lines of code, presumably not counting the framework it is built on?

That doesn't sound correct.


So, it's a personal pet project; I've thrown in everything and the kitchen sink. There's a Telegram integration so I can submit entries via Telegram, and there's a chatbot integration so that I can "talk to my entries" (and ask questions about what I did when). It imports weather data, Garmin data, and so on.

So yes, it's around 100k lines of code (Python, HTML, JS and CSS).


  > With AI agents, I can kick off a task while I'm going to the park with my kids. Instead of scrolling HN, I glance every now and then at what the agent is doing.

How does that work? Are you running the agents on a server? Are you using GNU screen and Termux? Can you respond to prompts asking for permission to e.g. run ls or grep?


I can do that (via something like VibeTunnel), but usually I just use the Claude Code web/mobile app.


All the big providers offer this. Usually they just work on your GitHub repo.


I see. So you're running an agent on a server against your GitHub repo, not working on your local machine. Thanks.


I have at least two projects that I estimated would take a week or two but aren't finished after years. There might be others that just got abandoned and should be included in the count.

Then there are things that work but aren't polished enough or should really have documentation.


Why did you abandon them? Every time I ask this question, I get lots of sob stories, but not a single explanation.


I can’t (due to other priorities) give consistent time to a project unless it is very important. That lack of consistency means I have to spend time re-learning what I was thinking and doing which is both inefficient and not fun. Since the projects are either experimental or not that important, I’m generally more motivated to do something else.

Over time I've learned not to even start such projects, but LLMs have made it easier to complete them: by making the work faster they shrink the time side of the time-versus-importance trade-off, and they ease the refamiliarization problem. That adds to the set of such projects I'm willing to tackle.


Lack of character, being distracted by other things for too long, drowning in unforeseen complexity, much slower progress than expected, boredom, force majeure, etc.


That is a fair distinction.

However, I don’t think using LLMs has to be an all-or-none proposition. You can still choose to build the parts you most care about yourself (where the learning happens) and delegate the other aspects to AI.

In the case of the text justifier, it was a small nuisance I wanted solved with very little effort. I didn't care about the browser APIs, just the visual outcome, so I let the LLM do it all.

If I were building something more complex, I would use LLMs much more mindfully. The value is in having the choice to delegate the chores so you can focus on the craft where it matters to you.

While we might value the process differently, the broader point remains that these tools enable people to build things they otherwise wouldn't have the time or specific resources to create, and still feel a sense of agency and ownership.


What's alarming to me is this:

I remember some of the early phases of home computing. The whole point of owning a home computer was that in addition to using other people's software, you could write your own and put the machine to whatever use you could think of. And it was a machine you owned, not time on some big company's machine which, ultimately, was controlled, and uses approved, by that company. The whole point of the home computing market was to create an environment where people managed the machines, not the other way around. (Wozniak has said that this was one of his motivations for creating the Apple I and II.)

Now we have people like this guy who say we finally have autonomy in computing—by purchasing time on some big company's machine doing numberwang to write the software for you. Ultimately the big company, not you, controls the machine and the uses to which it may be put. What's worse is these companies are buying up all the manufacturing capacity, starving the consumer market and making it more difficult to acquire computing hardware! No, this is not the autonomy envisioned by Wozniak, Jobs, or even a young shithead Bill Gates.


Hear, hear. The key word, I feel, is autonomy. It's like that article says: the coming war on general-purpose computing. We must seize the means of computation. We've already lost control of mobile phones, whose major operating systems barely allow you to see files or run software of your own choosing. That corporate colonization is coming for the rest of the personal computing stack.

Large language models, and the resources and exploitative means it took to create them, are not "free": they have serious social costs and come with a loss of personal freedom. I still use them, particularly local models, but even that is questionable. At least when the AI bubble bursts and the inevitable enshittification begins, I will be able to continue running local models without further vendor lock-in or erosion of privacy.

In terms of bootstrappability and supply chain risk, LLMs fail because we the people are not able to re-create them from scratch.


Hard agree.

The first time I saw a computer, I saw a machine for making things. I once read a quote from the actor Noel Coward, who said that television was "for appearing on, not watching", and I immediately connected it to my own relationship with computers.

I don't want an LLM to write software or blog posts for me, for the same reason I don't want to hire an intern to do that for me: I enjoy the process.

Everything else, I'm in agreement on. Writing software for yourself - and only for yourself - is a wonderful superpower. You can define the ergonomics for yourself. There are lots of things that make writing software a little less painful when you're the only customer: UX learning curves flatten, security concerns diminish a little, subscription costs evaporate...

I actually consider the ability to write software for yourself a more profound and important right than anything the open source movement offers. Of course, I want an environment that makes that easier, and it's this that makes me all the more concerned about closed ecosystems.


I definitely made software for me with zero desire to learn, zero learning happening, just to scratch an itch.

That being said, calling it "perfect" is a bit much, at least for mine: it does a thing, it does it well enough, and that's all. It could be better, but it won't be, because it's not worth it. It's good enough.


Are you writing software for the sense of accomplishment or to create software you wish you had?


The two aren't mutually exclusive.


Conversely, one is not necessarily a prerequisite for the other.


You can still create what you already know how to build by hand, while also extending into areas you were previously shy about with the help of LLMs.


Just today I gave an LLM the task of porting some Python modules to Rust. I then went back and learned enough Rust to understand these modules. This would have taken me days without the LLM. And I learned a lot.


Sometimes it’s nice to have other people cook you a tasty meal.


IMO LLMs/AI alone neither make nor break anything.


While I certainly found it insightful, I felt like this book (like so many in the genre) was a pamphlet's worth of material inflated to fill about 250 pages.


I recently put Alpine with i3 on a Raspberry Pi 4 Model B and I'm super impressed with how snappy it is. I find it much better even than Raspberry Pi OS Lite.


Same here. I put it on two very old RPi 1s and was amazed at how low the footprint is. I wish there were images available for other SBCs as well, mostly Allwinner-based ones (OrangePi, NanoPi, etc.); probably I did something wrong, but building them from scratch turned out to be more complicated than expected.


Yep, Alpine works well. A GUI can be tricky, though. And none of the RasPi tools (e.g. `raspi-config`) will run because of the different libc (musl rather than glibc).

So, running it on a Pi 5 CM in an IO board, there's no way to tell the Pi what device to boot from.


In a pull request to OCaml, when asked why the files he submitted list someone else as an author, he says,

    > Beats me. AI decided to do so and I didn't question it.°

I find that sort of attitude terrifying.

° https://github.com/ocaml/ocaml/pull/14369#issuecomment-35573...


I cannot believe it's not trolling


Yeah, either this guy's totally insane, or it could even be somebody who's an AI skeptic flooding projects with really dumb PRs just to show the risks and get people skeptical about the use of AI in open source. (Puts on my folie hat.)


That is a curious take. Open source projects were flooded by dumb PRs before AI too, so what would it prove?


Intentional or not, it's an interesting social experiment.


That's a grifter doing grifting. There was a thread on /g/ about this guy the other day; anons dug up much of his past as a failure/grifter in many areas, running away with the money at the first sign of trouble.


I'd be willing to put money on the idea that before LLMs they were all in on crypto.


Looking at his history here on HN, he started out in the poker world. I'm not sure if he played, but he wrote a poker engine or something. In my experience, the Venn diagram of professional poker players, crypto enthusiasts, and grifters has a lot of overlap.

But for this guy specifically, there's practically complete radio silence during the crypto era. It's only recently, with all the AI noise, that he's become active here on HN again.


I was a bit disappointed to discover that this was essentially an R vs. Python article, which is a data science trope. I've been in the field for 20+ years now and while I used to be firmly on team R, I now think that we don't really have a good language for data science. I had high hopes for Julia and even Clojure's data landscape looks interesting, but given the momentum of Python I don't see how it could be usurped at this point.


It is EVERYWHERE. I recently had to interview a bunch of data scientists, and only one of them knew SQL. All of them, of course, worked with Python. I bet none of them had even heard of R.


SAS > R > Python.

The focus of SAS and R was primarily limited to data-science-related fields; Python, however, is a far more generic programming language, so the number of folks exposed to it is much wider, and thus the pool of candidates who come in already exposed to Python is FAR LARGER than it ever was for SAS/R, even when SAS was actively taught and utilized in undergraduate/graduate programs.

As a hiring leader in the data science and engineering space, I have extensive experience with all of these plus SQL, among others. It has become much easier to hire across fields and educational backgrounds and find capable folks who can hit the ground running.


You beat me to it. I understand why SAS gets hate, but I think that comes from simply not understanding how powerful it is.


It was a great language, but it was (and is) extremely cost-prohibitive, and it simply fell out of favor in academia for many of the same reasons, and was thus supplanted by free alternatives.


Yikes. Were they experienced data scientists or straight out of school? I find it very odd (and a bit scary) that they didn't know SQL.


Experienced data scientists, and those straight out of school, are EXTREMELY lacking in valuable SQL experience and always have been. Take a DS with 25 years' experience in SAS: many of them are great with the DATA step, but have far less experience using PROC SQL to query the data in the most effective way, even when they were pulling the data down via pass-through with SAS/ACCESS.

Often they'd do very simplistic querying and then manipulate the data in a DATA step before running whatever modeling and/or reporting PROCs came later, rather than pushing the work upstream into a far faster native SQL pull executed by the database via pass-through.

Back in 2008/2009, I saved 30+ hours of runtime on a regular report by refactoring everything into SQL via pass-through, replacing the data scientists' original code, which simply pulled the data down from the external source and manipulated it in a DATA step. Going from 30 hours to 3 minutes (Oracle backend) freed up an entire FTE who had been babysitting a long-running job 3x a week, and let the report run multiple times per day.
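For anyone who hasn't seen this pattern before, the win comes from letting the database do the aggregation so that only the summary crosses the wire, rather than shipping every raw row to the client and reducing it there. Here is a minimal sketch of the same idea in Python against SQLite (the original was SAS pass-through against Oracle; the `sales` table and its columns are hypothetical, purely for illustration):

  import sqlite3

  conn = sqlite3.connect("warehouse.db")  # stand-in for the real backend

  # Slow pattern (the DATA-step style): pull every row across
  # the wire, then aggregate client-side.
  totals = {}
  for region, amount in conn.execute("SELECT region, amount FROM sales"):
      totals[region] = totals.get(region, 0) + amount

  # Fast pattern (the pass-through style): push the aggregation
  # upstream so only one row per region comes back.
  totals = dict(conn.execute(
      "SELECT region, SUM(amount) FROM sales GROUP BY region"
  ))

The exact speedup depends on row counts and network latency, but the shape of the fix is the same in any client language.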


What would it even mean to be a "good language for data science"?

In the first place, data science is more a label someone slapped on a bag full of cats than a vast field covered by similarly sized boxes.


SAS has entered the chat


I certainly agree with your take on Buddhism, but I often find that sage advice is buried amongst spiritual waffle in Buddhist books.


But that's what religion is: wisdom and nonsense mixed together by people who didn't yet have the benefit of the great filter that separates wisdom from nonsense, namely science.


The sheer number of annoying twats who tell me that the Book of I Ching holds great secrets has definitely helped obscure any great secrets it contains.


It's like a package manager on steroids!

When I tried Gleam, I loved that it came with all the basic tooling I needed, and that's what I think is so wonderful about Lux. I don't want to spend my time fiddling around setting up all the individual tools; I just want to write code. For me, Lux makes the broader experience of building Lua projects a lot more enjoyable.


I've taken to using turboLua as my main Lua 'Swiss army tool', since it comes with so many things built in, on top of a fairly functional LuaJIT 2.0.

https://turbo.readthedocs.io/en/latest/

If I can get Lux to deal with the package management scenarios around a few turboLua projects, I'm pretty sure I'm going to ship much more Lua code next year.


Out of curiosity, what in a broad sense is configured in those 10 lines? (For context: I'm a Helix user trying out terminal-based vim at the moment and I'd like to know what the most important tweaks are.)


Some basic ones, like:

  syntax on        " enable syntax highlighting
  set number       " show line numbers
  set expandtab    " insert spaces instead of tab characters
  set tabstop=4    " a tab displays as four columns
  set modelines=0  " disable modelines (an old security footgun)
  set nowrap       " don't wrap long lines

and then a few related to ctags.

