Hacker News | cstejerean's comments

The problem with gastown is that it tries to use agents for supervision when it should be possible to use much simpler, deterministic approaches to supervision, which would also be a lot more token efficient.

I strongly believe we will need both agentic and deterministic approaches. Agentic to catch edge cases and the like; deterministic as those problems (along with the simpler ones early on) are continually turned into hard-coded solutions to the maximum extent possible.

Ideally you could eventually remove the agentic supervisor. But for some cases you would want to keep it around, or at least a smaller model which suffices.


How long ago was this? A review with the latest models should absolutely catch the issue you describe, in my experience.

Ah, "It's work on my computer" edition of LLM.

December. My previous job had Cursor and Copilot automatically reviewing PRs.

Well of course, we basically expanded the ASD definition to cover a wide range in order to ensure that everyone gets access to support if needed, but in the process turned "autism" into a grab bag of different conditions which makes discussions about it difficult because everyone is talking about something else.


It's kinda like mental health issues go through their own hype cycles. Autism now is where ADHD was in the late 90s / early 2000s. I say this as someone who went through the Ritalin treatment a long time ago, but that was after my parents, at their wits' end, tried drugging me with Gravol. If that were today, it'd probably be some neurospicy autism diagnosis and treatment.


Why do I have a feeling that most of that $30B was spent on consultants, most of whom were also essentially making things up as they went along?


Are there really that many consultants floating around still? I remember the heyday of the 2000s and kind of thought outside consultants had largely disappeared, and that nowadays companies have trained themselves to jump on the hype train without paying consultants to push them onto it.


I don't know?

For some reason, I'm thinking most of the money went to either inferencing costs or NVidia.


Playing against an AI that's really dumb gets boring quickly. Playing against an AI that's way too good gets annoying quickly.

I want an AI that can play like a human at my level would, such that the game is competitive and fun.


You probably don’t though. It’s actually really unfun to lose 50% of matches against an AI, or worse, because it doesn’t get tired or tilted or distracted.

It’s much more fun to go against an AI that is dumber than you but generally more powerful.


Different kinds of AI are likely fun for different players. Games have difficulty levels partly because not everyone wants the same level of difficulty relative to their own skill level. Some may want something easily beatable for them, some may want something difficult for them to beat.


It's unfun if the AI feels like it's cheating.

In Counter-Strike the AI can be super dumb and slow, until it progressively becomes a dumb aimbot. It doesn't become a better player with game sense and tactics, just raw aim (try Arms Race if you want to feel it yourself).

In Fear the AI is pretty advanced in a more organic way. It coordinates several enemies to locate you and engage with you in a way that sometimes feels totally human. That feels great and when you lose you try again thinking something like "I should have positioned myself better" instead of "I guess my aim is just not fast enough".

We just don't get enough good AIs to know how good they can feel.


Not only that, some of the problems with addiction were directly caused by the dosage guidelines for OxyContin. They really wanted it to be a 12h drug, but it really isn't, and it wears off after about 8 hours. Rather than admitting this and giving a smaller dose more frequently, they doubled down by using a larger dose and trying to keep to the 12h schedule.

This combination of a larger dose followed by mild withdrawal then results in a higher likelihood of becoming addicted to opioids. So not only did they market it heavily and get more people on opioids than necessary, they did it in a way that maximizes the likelihood of addiction.

https://www.latimes.com/projects/oxycontin-part1/


> the Waymo ADS’s perception system assigned a low damage score to the object;

and Tesla would do better how in this case? It also routinely crashes into stationary objects, presumably because the system assumes it wouldn't cause damage.


> and Tesla would do better how in this case? It also routinely crashes into stationary objects, presumably because the system assumes it wouldn't cause damage.

Are the Teslas in the room with you right now?

Please point out in my comment where I mentioned Tesla. I can wait.


Completely agree. It's been 18 years since Nvidia released CUDA. AMD has had a long time to figure this out so I'm amazed at how they continue to fumble this.


10 years ago AMD was selling its own headquarters so that it could stave off bankruptcy for another few weeks (https://arstechnica.com/information-technology/2013/03/amd-s...).

AMD's software investments only began in earnest a few years ago, but AMD really did progress more than pretty much everyone else aside from NVidia, IMO.

AMD further made a few bad decisions where they "split the bet", relying upon Microsoft and others to push software forward (I did like C++ AMP, for what it's worth). The underpinnings of C++ AMP led to Boltzmann, which led to ROCm, which then needed to be ported away from C++ AMP and onto the CUDA-like HIP.

So it's a bit of a misstep there for sure. But it's not like AMD has been dilly-dallying. And for what it's worth, I would have personally preferred C++ AMP (an open specification built on C++11 that represents GPU kernels as []-lambdas rather than CUDA-specific <<<extensions>>>). Obviously everyone else disagrees with me, but there's some elegance to parallel_for_each([](param1, param2){magically a GPU function executing in parallel}), where the compiler figures out the details of how to get param1 and param2 from CPU RAM onto the GPU (or you use GPU-specific allocators to put param1/param2 in GPU memory already and bypass the automagic).
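
Roughly what that style looks like, as a minimal sketch against Microsoft's <amp.h> implementation (the saxpy function here is just my illustrative example, not anything AMD or Microsoft shipped):

    #include <amp.h>      // Microsoft C++ AMP (Visual C++ only)
    #include <vector>
    using namespace concurrency;

    // y = a*x + y, with the kernel written as a plain []-lambda.
    void saxpy(float a, const std::vector<float>& x, std::vector<float>& y) {
        array_view<const float, 1> xv((int)x.size(), x);  // read-only input, wraps CPU memory
        array_view<float, 1> yv((int)y.size(), y);        // copied to the GPU on demand

        parallel_for_each(yv.extent, [=](index<1> i) restrict(amp) {
            yv[i] += a * xv[i];   // executes on the GPU
        });

        yv.synchronize();  // copy the results back into the std::vector
    }

Compare that to a CUDA-style kernel<<<blocks, threads>>>(...) launch plus explicit cudaMemcpy calls; the AMP runtime handles the data movement for you, at the cost of some control.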


Nowadays you can write regular C++ in CUDA if you so wish, and unlike AMD, NVidia employs several WG21 contributors.


CUDA of 18 years ago is very different to CUDA of today.

Back then AMD/ATI were actually at the forefront on the GPGPU side - things like the early Brook language and CTM led pretty quickly into things like OpenCL. Lots of work went on using the Xbox 360 GPU in real games for GPGPU tasks.

But CUDA steadily improved iteratively, and AMD kinda just... stopped developing their equivalents? Considering that for a good part of that time they were near bankruptcy, it might not have been surprising though.

But saying Nvidia solely kicked off everything with CUDA is rather ahistorical.


> AMD kinda just... stopped developing their equivalents?

It wasn't so much that they stopped developing, rather that they kept throwing everything out and coming out with new, non-backwards-compatible replacements. I knew people working in the GPU compute field back in those days who were trying to support both AMD/ATI and NVidia. While their CUDA code just worked from release to release, and every new release of CUDA just got better and better, AMD kept coming out with new breaking APIs and forcing rewrite after rewrite, until they just gave up and dropped AMD.


> CUDA of 18 years ago is very different to CUDA of today.

I've been writing CUDA since 2008 and it doesn't seem that different to me. They even still use some of the same graphics in the user guide.


Yep! I used BrookGPU for my GPGPU master's thesis, before CUDA was a thing. AMD lacked follow-through on the software side, as you said, but a big factor was also NV handing out GPUs to researchers.


10 years ago they were basically broke and bet the farm on Zen. That bet paid off. I doubt a bet on a CUDA competitor would have paid off in time to save the company. They definitely didn't have the resources to split that bet.


It's not like the specific push for AI on GPUs came out of nowhere either; Nvidia first shipped cuDNN in 2014.


None of the tech companies are selling your data to advertisers. They allow advertisers to target people based on the data, but the data itself is never sold. And it would be dumb to sell it because selling targeted ads is a lot more valuable than selling data.

Just about everyone other than the tech companies is actually selling your data to various brokers, from the DMV to the cellphone companies.


> None of the tech companies are selling your data to advertisers.

First-hand account from me that this is not factual at all.

I worked at a major ("big 5") media buying agency in advanced analytics; we were a team of 5-10 data scientists. We got a firehose, on behalf of our client, a major movie studio, of searches for their titles by zip code from "G".

On top of that, we had clean-roomed audience data from "F" on viewers of the ads/trailers who also viewed ads on their set-top boxes.

I can go on and on, and yeah, we didn’t see “Joe Smith” level of granularity, it was at Zip code levels, but to say FAANG doesn’t sell user data is naive at best.


> we didn’t see “Joe Smith” level of granularity, it was at Zip code levels

So you got aggregated analytics instead of data about individual users.

Meanwhile other companies are selling your name, phone number, address history, people you are affiliated with, detailed location history, etc.

Which one would you say is "selling user data"?


The problem is you're limited to 24 GB of VRAM unless you pay through the nose for datacenter GPUs, whereas you can get an M-series chip with 128 GB or 192 GB of unified memory.


Sure! The point is that they're not million-times-faster magic chips that will make NVIDIA bankrupt tomorrow. That's all. A laptop with up to 128GB of "VRAM" is a great option, absolutely no doubt about that.


They are powerful, but I agree with you: it's nice to be able to run Goliath locally, but it's a lot slower than my 4070.

