Hacker News | shade's comments

I have a 2023 Crosstrek, my wife has a '21 Ascent. I have the same habit you do - edging away from large trucks slightly - and both of them do the same thing you described to me.

It's essentially that Subaru's lane system actually has two levels: it has lane keeping where it's just trying to keep you inside the lines, and then on top of that it also has lane centering which is pretty much what it says.

Just a note for you or anyone reading who has a recent Subaru and doesn't know already: if you find the centering really bothersome, you should be able to go into the settings on the instrument cluster display (up/down arrows at the lower left behind the wheel, toggle it until you get to the "hold for settings" option), find the Eyesight settings, and turn off lane centering. It will still try to keep you inside the lane markers but won't try to park you right in the center of the lane. In that mode, it's more like the Honda Sensing system I had on my 2016 Civic.

I go back and forth a bit on it but mostly keep it in lane centering mode now - I've gotten used to how it positions the car in the lane, and it lets me focus more on what's going on around me than micromanaging lane position and such.


> It's essentially that Subaru's lane system actually has two levels: it has lane keeping where it's just trying to keep you inside the lines, and then on top of that it also has lane centering which is pretty much what it says.

Same with Hyundai except they call them "Lane Keeping Assist" (LKA) and "Lane Following Assist" (LFA) and I have trouble remembering which one centers you and which one just keeps you from leaving the lane.

To me, just based on the names, I'd have expected keeping to be the one that actively positions you (it keeps you centered) and following to be the one that just reacts when you're about to depart the lane (it keeps you following the lane).

Mostly now I just remember that the one that comes on automatically any time I'm going 40+ mph is the reactive one, and the one that I have to explicitly turn on is the centering one (although both come on automatically on certain highways based on data from the navigation system).


idk whether subaru is exactly the same as hyundai but i basically turned lane centering off on my hyundai. when possible i only use radar cruise control and lane follow. if i want to overtake, i turn on my signal and it'll automatically and safely increase speed to the set cc speed and ease off the lane follow. it's pretty seamless.

lane centering is a bit too annoying for me, i need to keep my hands on the wheels anyway.


> i need to keep my hands on the wheels anyway.

Alignment's off. Not as bad as it used to be to get it done.


I'm deaf, so I test a lot of speech to text and transcription apps from an accessibility point of view.

My answer to "why have a monthly subscription" would be that you need capabilities that Whisper doesn't handle well, like real-time transcription in noisy environments.

That's not the niche you're targeting here, though. :)

My experience is that Whisper - not being built for real time speech to text - isn't as good at it as other tools are. You can hack something together by stacking together progressively more audio frames to feed to Whisper to give it context, but IME, you're going to get better results from a model that's designed for real-time STT in the first place, or by using a service like Azure Speech to Text which has excellent noise resilience... but which is also an ongoing cost which would justify a subscription. Real-time Whisper also devours your battery quickly.
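The frame-stacking hack mentioned above can be sketched roughly like this - accumulate incoming audio into a growing buffer and re-transcribe the whole window on each pass, trimming from the front once it exceeds Whisper's 30-second chunk size. This is just an illustration of the buffering idea; the `transcribe` callback stands in for an actual Whisper call and is an assumption, not a specific API:

```python
import numpy as np

SAMPLE_RATE = 16000  # Whisper expects 16 kHz mono float32 audio


class GrowingTranscriber:
    """Accumulate audio frames and re-transcribe the growing window.

    Whisper isn't streaming-native, so every pass re-processes the
    entire buffer - which is why this approach chews through battery.
    """

    def __init__(self, transcribe, max_seconds=30):
        self.transcribe = transcribe  # placeholder for a real Whisper call
        self.max_samples = max_seconds * SAMPLE_RATE
        self.buffer = np.zeros(0, dtype=np.float32)

    def feed(self, frame):
        self.buffer = np.concatenate([self.buffer, frame])
        if len(self.buffer) > self.max_samples:
            # Drop the oldest audio to stay within the model's window,
            # losing context for words near the cut point.
            self.buffer = self.buffer[-self.max_samples :]
        return self.transcribe(self.buffer)
```

Each `feed()` call gets more context than the last, which improves accuracy, but the repeated full-window passes are exactly where the battery drain comes from.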

That said - while I've had very good experiences with Parakeet in MacWhisper, I'm curious if you evaluated Apple's SpeechAnalyzer APIs at all. It's unfortunately limited to macOS/iOS/iPadOS 26+ since it's a new API, but it's on device, has comparable quality of results to Whisper Large v3 Turbo and Parakeet, and seems to be better on battery usage.


Yep, I'm also deaf (since age 6), went through a lot of speech therapy, and have a very pronounced deaf accent. I live in the midwestern US (specifically, Ohio) and at least once a year I get asked where I'm from - England being the most common guess, but I've also had folks ask if I'm Scottish or Australian.

AI struggles massively with my accent. I've gotten the best results out of Whisper Large v2 and even that is only perhaps 60% accurate. It's been on my todo list to experiment with using LLMs to try to clean it up further - mostly so I can do things like dictate blog post outlines to my phone on long car rides - but I haven't had as much time as I'd like to mess around with it.
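The LLM cleanup experiment I have in mind would look something like this - wrap the raw Whisper output in a correction prompt and hand it to whatever model is available. The prompt wording and the `ask_llm` callback are placeholders I'm assuming for illustration, not a tested setup:

```python
def build_cleanup_prompt(raw_transcript: str) -> str:
    """Build a prompt asking an LLM to repair a noisy STT transcript."""
    return (
        "The following is an automatic speech-to-text transcript of a "
        "speaker with a strong deaf accent, so it contains misrecognized "
        "words. Rewrite it with the most plausible intended wording, "
        "changing as little as possible and keeping the speaker's phrasing.\n\n"
        f"Transcript:\n{raw_transcript}"
    )


def clean_transcript(raw: str, ask_llm) -> str:
    # ask_llm is any callable that sends a prompt to a model and
    # returns the model's text response (API choice left open).
    return ask_llm(build_cleanup_prompt(raw))
```

The "change as little as possible" constraint matters: without it, models tend to rewrite the whole thing in their own voice rather than just fixing the misheard words.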


Yeah, I've been deaf for over 40 years now and captioning glasses are something that I've wanted ever since I was a kid. I'm not a particularly big fan of Meta and I have some serious reservations around privacy that need to be satisfied, but at the same time it's really exciting to see this going from "pie in the sky thing I dreamed about having when I was ten" to "actual existing product."

There are a few other companies/startups working on this too, but a lot of the glasses they're producing are very ugly. There were a couple that didn't look bad, but from what I'm seeing Meta's have both the best looks and the best display so far, and I'll be very curious to see the reviews.


One of my weird hobbies is radar chasing storms, and all of that stuff is completely normal. NEXRAD is very sensitive, especially when it's in clear air mode (it has different modes depending on if it's raining in the area) and can pick up things like dust, birds, bats, and insects. There's also ground clutter from things like buildings, wind farms, and even cars.

The National Weather Service has a good brief explainer: https://www.weather.gov/iwx/wsr_88d

They also have an interesting PDF covering some of the more unique signatures you might see, though it's not exhaustive: https://www.weather.gov/media/btv/research/Radar%20Artifacts...


Does clear air mode pick up wildfire smoke? There has been an awful lot of that lately over the US from Canada and the West Coast.


> In the last 10 years has technology actually made my life better?

In my case? Yes, absolutely. Automatic speech to text is now cheap or free, ubiquitous across most platforms (even Linux!), and generally very effective. Total game changer to my ability to participate in meetings at work and in society generally.


Like I said, there are fringe cases as always.


I would say it's cool in the sense that building anything is cool, but I find myself mostly in agreement with your take, although with a caveat.

I can't find the quote now, but someone (I think simonw?) said that they feel a bit of an obligation to spend at least as much time working on writing something as it would take to read it, and I agree with that... if you want me to spend time reading your post, I'd like to know you actually made an effort on it.

For me, writing is thinking, and helps me refine my thinking, so I don't use AI to assist my writing process. I agree with the comments that AI writing tends to have a specific voice, and I don't care for that voice and don't want my writing to come across that way.

Where I do find it useful in writing, however, is as an editing pass in an advisory role. I don't ask it to rewrite anything for me, but I will ask it to double-check for things like excessive passive voice, inconsistent tone, or points I raise but never answer. I typically write my draft posts in Zed, and use Zed's AI chat panel to throw a request at Claude. The big thing, though, is not blindly accepting every suggestion the AI makes - I read them, think about it, and sometimes adjust the post based on that feedback. It's a useful sanity-checking step, and while a real human editor would be preferable, I can't justify the cost to hire an editor for my little blog that probably gets zero hits most days. :)


I’m on board with your caveat. I have no issues with using AI to analyze the writing to identify potential issues. I currently use spell/grammar checking and I have no qualms about that. A more robust proofreader does seem useful.

To your point, I’d want to constrain such a review to identifying structural or consistency issues vs. the AI getting involved with the subject matter itself.

The issue I ran into the few times I wrote something and asked ChatGPT to read it and identify any issues was that it was all too eager to tell me how to massively restructure things. The result read like typical AI slop. This was a prompting issue on my part because I gave it instructions that were too open ended. Careful/restrictive prompting is definitely necessary to make this viable.


Yup. I grew up in Hancock Co and used to ride occasionally with the bike club there, had a couple of days when the ride out was brutal because of persistent, endless wind, but then the ride back was awesome for the same reason. :)


Yep, I'm in the exact same situation as you.

The tools for in-person are getting better, but aren't frictionless to set up and sometimes require you to spend time futzing with getting your iPad or iPhone to actually see an external microphone. I don't know if Android is better about this or not, unfortunately. I would _hope_ that interviewers would extend people a bit of grace about this, but who knows.

As an aside - I saw your post on Apple Live Captions, and completely agree with you. I've been slowly adding to a collection of reviews of various captioning tools, and was _very_ critical of some of the choices Apple made there.


I’d love to read this collection of reviews!


I have the OG 13" MBP M1, and it's been great; I only have two real reasons I'm considering jumping to the 14" MBP M4 Pro finally:

- More RAM, primarily for local LLM usage through Ollama (a bit more overhead for bigger models would be nice)

- A bit niche, but I often run multiple external displays. DisplayLink works fine for this, but I also use live captions heavily and Apple's live captions don't work when any form of screen sharing/recording is enabled... which is how DisplayLink works. :(

Not quite sold yet, but definitely thinking about it.


The M1 Max supports more than one external display natively, which is also an option.


I don't think it's niche. It's the reason why I and several of my coworkers waited until the M1 Pro to buy one.

I'm definitely still happy with it, but my job is offering an upgrade to the M4, so... why not?

