
Big fan of Handy, and it’s cross-platform as well. Parakeet V3 gives the best experience, with very fast and accurate-enough transcriptions when talking to AIs that can read between the lines. Handy does have stuttering issues, though. My primary use is talking to coding agents.

But a few weeks ago someone on HN pointed me to Hex, which also supports Parakeet V3 and, incredibly enough, is even faster than Handy because it’s a native MacOS-only app that leverages CoreML and the Neural Engine for extremely quick transcriptions. Long ramblings get transcribed in under a second!

It’s now my favorite fully local STT for MacOS:

https://github.com/kitlangton/Hex


I installed a few different STT apps that used Parakeet at the same time, and I think they conflicted with each other. Otherwise Hex would’ve won for me, I think. I want to reformat the Mac and try again (it’s been a while anyway).

My comment on this from a month back: https://news.ycombinator.com/item?id=46637040


Speaking of audio + AI, here's a "learning hack" I've been trying with voice mode, and the 3 big AI labs still haven't nailed it:

While on a walk with a phone and earphones, dump an article/paper/HN post/GitHub repo into the mobile chat app (ChatGPT, Claude, or Gemini) and use voice mode to have it walk you through it conversationally, so you can ask follow-up questions along the way and the AI can do web searches as needed. I know I could do something like this with NotebookLM, but I want to engage in the conversation; NotebookLM does have an interactive mode, but it has been super flaky, to say the least.

I pay for ChatGPT Pro and the voice mode is really bad: it pretends to do web searches and makes up things, and when pushed says it didn't actually read the article. Also the voice sounds super-condescending.

Gemini Pro mobile app - similarly refuses to open links and sounds as if it's talking to a baby.

The Claude mobile app was the best of these: the voice is very tolerable in tone, but like the others it can't open links. It does do web searches, but only gets some kind of summary of the pages; it doesn't actually go into the links themselves to give me details.


I have found that the "advanced voice mode" is dumb as a box of rocks compared to their "basic" TTS version, so I disable it. I've switched to Claude, so I don't know if that's still an option, but if you are tied to ChatGPT, definitely disable it if possible!

It's amazing how good open-weight STT and TTS have gotten, so there's no need to pay for Wispr Flow, Superwhisper, Eleven-Labs etc.

Sharing my setup in case it's useful to others, especially when working with CLI agents like Claude Code or Codex CLI:

STT: Hex [1] (open source), with Parakeet V3 - stunningly fast, near-instant transcription. The slight accuracy drop relative to bigger models is immaterial when you're talking to an AI. I always ask it to restate what it understood, and it gives back a nicely structured version -- this confirms understanding and likely helps the CLI agent stay on track. Hex is a MacOS-native app that leverages CoreML and the Neural Engine for extremely fast transcription. (I used to recommend a similar app, Handy, but it has frequent stuttering issues, and Hex is actually even faster, which I didn't think was possible!)

TTS: Kyutai's Pocket-TTS [2], just 100M params, with amazing speech quality (English only). I made a voice plugin [3] based on this for Claude Code, so it can speak short updates whenever CC stops. It uses a combination of hooks that nudge the main agent to append a speakable summary, falling back to a headless agent in case the main agent forgets. It turns out to be surprisingly useful. It's also fun, since you can customize the speaking style to mirror your vibe, "colorful language", etc.

The voice plugin gives commands to control it:

    /voice:speak stop
    /voice:speak azelma (change the voice)
    /voice:speak prompt <your arbitrary prompt to control the style>
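
For the curious, here's a minimal sketch of what the stop-hook glue could look like. This is not the actual plugin code: the script name, the "SPEAK:" tag convention, and the TTS command are all assumptions; the only real parts are that Claude Code passes hook input (including a transcript path) as JSON on stdin, and that the session transcript is JSONL.

    # stop_hook_speak.py -- hypothetical sketch of a Claude Code Stop hook
    # that speaks a short summary whenever the agent stops.
    import json, subprocess, sys

    def last_assistant_text(transcript_path: str) -> str:
        text = ""
        with open(transcript_path) as f:
            for line in f:
                try:
                    entry = json.loads(line)
                except json.JSONDecodeError:
                    continue
                if entry.get("type") == "assistant":
                    msg = entry.get("message", {})
                    parts = [c.get("text", "") for c in msg.get("content", [])
                             if isinstance(c, dict) and c.get("type") == "text"]
                    if parts:
                        text = "\n".join(parts)
        return text

    def main():
        hook_input = json.load(sys.stdin)   # Claude Code sends hook data as JSON on stdin
        text = last_assistant_text(hook_input.get("transcript_path", ""))
        # Speak only a short tagged summary line, if the agent remembered to add one.
        summary = next((l.removeprefix("SPEAK:").strip()
                        for l in reversed(text.splitlines())
                        if l.startswith("SPEAK:")), None)
        if summary:
            subprocess.run(["pocket-tts-say", summary])  # placeholder TTS command

    if __name__ == "__main__":
        main()

The real plugin additionally falls back to a headless agent to generate the summary when the tag is missing, and handles locking and async playback; this just shows the shape of the hook.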
[1] Hex https://github.com/kitlangton/Hex

[2] Pocket-TTS https://github.com/kyutai-labs/pocket-tts

[3] Voice plugin for Claude Code: https://pchalasani.github.io/claude-code-tools/plugins-detai...


Same setup I’m using! Parakeet and pocket turbo. It feels good enough for daily usage.

Anyone know of something like Hex that runs on Linux?

Handy is cross-platform, including Linux.

+1 for Handy, it's very easy to get running and once it is you don't have to think about it again.

Is Hex MacOS only?


Indeed. Over a few days of iteration I had this TUI built for fast full-text search of Claude Code or Codex sessions, using Ratatui (and Tantivy for the full-text search index). I would never have dreamed of this before coding agents.

https://pchalasani.github.io/claude-code-tools/tools/aichat/...


Attention is all everyone wants.

I think there’s a level beyond 8: not reviewing AI-generated code.

There’s a lot of discussion about whether to let AI write most of your code (which, at least in some circles, is largely settled by now), but when I see hype posts about “AI is writing almost all of our code”, the question I’m most curious about is: how much of that AI-written code are they actually reviewing?


Related: I used the amazing 100M-parameter Pocket-TTS [1] model to make a stop-hook-based voice plugin [2] that lets Claude Code give a short voice update whenever it stops. The hook quietly inserts nudges for Claude Code to end its response with a short speakable summary, and if it forgets, a headless agent creates the summary instead.

It was trickier than I expected to get working well: FFmpeg pipe streaming for low-latency playback, a three-hook injection strategy because the agent forgets instructions mid-turn, mkdir-based locks to queue concurrent voice updates from multiple sessions, and /tmp sentinel files to manage async playback state and prevent infinite loops.
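
To illustrate the locking bit: mkdir is atomic on POSIX filesystems, so it makes a cheap cross-process lock that multiple sessions can queue on. A rough sketch (the lock path and timeout here are made up, not the plugin's actual values):

    # speak_lock.py -- sketch of an mkdir-based lock for serializing voice updates.
    # os.mkdir is atomic: exactly one process succeeds in creating the directory;
    # everyone else gets FileExistsError and waits their turn.
    import contextlib, os, time

    LOCK_DIR = "/tmp/voice-plugin.lock"   # hypothetical path

    @contextlib.contextmanager
    def voice_lock(timeout: float = 30.0, poll: float = 0.2):
        deadline = time.monotonic() + timeout
        while True:
            try:
                os.mkdir(LOCK_DIR)        # atomic acquire
                break
            except FileExistsError:
                if time.monotonic() > deadline:
                    raise TimeoutError("another session is still speaking")
                time.sleep(poll)          # queue behind whoever is speaking now
        try:
            yield
        finally:
            os.rmdir(LOCK_DIR)            # release

    # usage: with voice_lock(): play_audio(summary)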

[1] Pocket-TTS: https://github.com/kyutai-labs/pocket-tts

[2] Claude-code voice plugin: https://pchalasani.github.io/claude-code-tools/plugins-detai...


I use the open-source Handy [1] app with Parakeet V3 for STT when talking to coding agents, and I’ve yet to see anything that beats this setup in terms of speed/accuracy. I get near-instant transcription, and the slight accuracy drop is immaterial when talking to AIs that can “read between the lines”.

I tried incorporating this Voxtral C implementation into Handy but got very slow transcriptions on my M1 Max MacBook 64GB.

[1] https://github.com/cjpais/Handy

I’ll have to try the other implementations mentioned here.


Handy is great but I wish the STT was realtime instead of batch

There’s a tradeoff here. If you want streaming output, you lose the opportunity to clean up the transcript in post-processing: removing filler words, removing stutters, or any other AI-based cleanup.
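
For a sense of what that post-processing can look like, here's a toy cleanup pass over a finished transcript (the filler list and stutter pattern are just illustrative; real pipelines often use an LLM pass instead):

    # transcript_cleanup.py -- toy post-processing pass over a finished transcript.
    # This works on the full text, which is exactly what you give up with streaming.
    import re

    FILLERS = r"\b(um+|uh+|erm+|you know|i mean)\b\s*"
    STUTTER = r"\b(\w+)(\s+\1\b)+"   # "the the the" -> "the"

    def clean(text: str) -> str:
        text = re.sub(FILLERS, "", text, flags=re.IGNORECASE)
        text = re.sub(STUTTER, r"\1", text, flags=re.IGNORECASE)
        return re.sub(r"\s{2,}", " ", text).strip()

    print(clean("so um um I think the the stop hook should just uh retry"))
    # -> "so I think the stop hook should just retry"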

The MacOS built-in dictation streams in real time and does some cleanup, but it does awkward things, like showing the streaming text at the bottom of the screen. Also, I don’t think it’s as accurate as Parakeet V3, and there’s a start-up lag of 1-2 seconds after hitting the dictation shortcut, which kills it for me.


I feel like this is a solvable problem. If you emit an errant word that should be replaced, why not correspondingly emit backspaces to just rewrite the word?

I feel like this is the best of both worlds.

Perhaps a little janky with backspaces, but still technically feasible.
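
A rough sketch of that correction step, assuming you can inject keystrokes: find the common prefix between what's on screen and the revised hypothesis, backspace over the divergent tail, and retype the new tail (emit() is a stand-in for whatever actually injects keystrokes):

    # streaming_correction.py -- sketch of backspace-based correction for streaming STT.
    # Each time the recognizer revises its hypothesis, erase only the part that changed.

    def common_prefix_len(a: str, b: str) -> int:
        n = min(len(a), len(b))
        i = 0
        while i < n and a[i] == b[i]:
            i += 1
        return i

    def emit(text: str) -> None:
        # Stand-in: a real app would inject keystrokes via OS accessibility APIs.
        print(repr(text))

    def correct(on_screen: str, revised: str) -> str:
        keep = common_prefix_len(on_screen, revised)
        emit("\b" * (len(on_screen) - keep))   # erase the divergent tail
        emit(revised[keep:])                   # retype the corrected tail
        return revised

    state = correct("", "I think we shoud")
    state = correct(state, "I think we should ship it")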


Have you tried Hex?

https://github.com/kitlangton/Hex

Faster than Handy and uses way less memory.


Indeed, it's extremely fast; it's now my go-to for STT on MacOS. I made a PR to allow a single-tap toggle hotkey instead of double-tap. Unlike Handy, which aims to be multi-platform, Hex is MacOS-native and leverages CoreML + the Apple Neural Engine for far speedier transcription.

Nice, will try, thanks!

Same here. I haven’t found an ASR/STT/transcription setup that beats Parakeet V3 on the speed/accuracy tradeoff spectrum: transcription is extremely fast (near instant for a couple sentences, 1-3 seconds for long ramblings), and the slight accuracy drop relative to heavier/slower models is immaterial for the use case of talking to AIs that can “read between the lines” (terminal coding agents etc).

I use Parakeet V3 in the excellent Handy [1] open-source app. I tried incorporating the C-language implementation mentioned by others into Handy, but it was significantly slower. Speed is absolutely critical for good UX in STT.

[1] https://github.com/cjpais/Handy


Can you use Handy exclusively via the CLI if you have a file to feed it?

Not sure about that

Not currently

This sounds very promising. Using multiple CC instances (or a mix of CLI agents) across tmux panes has always been a workflow of mine, where agents can use the tmux-cli [1] skill/tool to delegate to and collaborate with others, or review/debug/validate each other's work.

This new orchestration feature makes it much more useful, since the agents share a common task list and the main agent coordinates across them.
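
For context, the underlying mechanism is just plain tmux: one agent can drive a sibling pane with send-keys and read its output with capture-pane. A bare-bones sketch of those two primitives (this is not the tmux-cli tool itself; the pane id, prompt, and crude sleep are simplifications):

    # tmux_delegate.py -- bare-bones sketch of driving a sibling agent in another tmux pane.
    # Not the tmux-cli tool itself; just the two tmux primitives such a tool builds on.
    import subprocess, time

    def send_to_pane(pane: str, text: str) -> None:
        # Type the prompt into the target pane and press Enter.
        subprocess.run(["tmux", "send-keys", "-t", pane, text, "Enter"], check=True)

    def read_pane(pane: str, lines: int = 50) -> str:
        # Grab the last `lines` lines of the target pane's scrollback.
        out = subprocess.run(
            ["tmux", "capture-pane", "-t", pane, "-p", "-S", f"-{lines}"],
            check=True, capture_output=True, text=True)
        return out.stdout

    # e.g. ask a reviewer agent running in pane %2 to look at a diff, then read its output
    send_to_pane("%2", "Please review the diff in /tmp/feature.patch and list any bugs")
    time.sleep(30)   # crude wait; a real tool would poll the pane for a completion marker
    print(read_pane("%2"))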

[1] https://github.com/pchalasani/claude-code-tools?tab=readme-o...


Yeah, I've been using your tools for a while. They've been nice.
