I grew tired of endless YouTube videos, X posts, and web articles.
So this app lets you open any link and instantly get an AI summary + brief of the content.
(It's free up to 20 articles because there are real costs: I use Gemini to summarize the pages you open)
AI voices run locally on your iPhone/iPad (web extension version coming soon).
I built a voice AI stack, and background noise can be really helpful to a restaurant AI, for example: Italian background music or café ambience is part of the brand. It's not meant to make the caller believe this isn't a bot, only to make the AI call on-brand.
You can call it whatever you like, but to me this is deceptive.
Where is the difference between this and Indian support staff pretending to be in your vicinity by telling you about the local weather? Your version is arguably even worse because it can plausibly fool people more competently.
It doesn't have to be. You can configure your bot to greet the caller.
E.g. "Aleksandra is not available at the moment, but I'm her AI assistant to help you book a table. How may I help you?"
So you're telling the caller that it is an AI, and yet you can have a pleasant background audio experience.
Yes, Durable Objects (DOs) let you handle long-lived websocket connections.
I think this is unique to Cloudflare; AWS and Google Cloud don't seem to offer this kind of statefulness.
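A minimal sketch of the statefulness I mean. The real code would use Cloudflare's Durable Object API (`state.acceptWebSocket` and friends); here that is replaced by a tiny mock `Socket` interface so the example is self-contained, and the class name, methods, and payload shape are all illustrative:

```typescript
// One instance per phone call: the object keeps the transcript and the
// attached sockets in memory for the lifetime of the call. A Durable
// Object gives you exactly this single-instance, in-memory guarantee.
interface Socket {
  send(data: string): void;
}

class CallSession {
  private sockets = new Set<Socket>();
  private transcript: string[] = [];

  attach(ws: Socket): void {
    this.sockets.add(ws);
  }

  // An STT partial/final result arrives; store it and fan it out
  // to every connected client.
  onTranscript(text: string): void {
    this.transcript.push(text);
    for (const ws of this.sockets) {
      ws.send(JSON.stringify({ type: "transcript", text }));
    }
  }

  history(): string[] {
    return [...this.transcript];
  }
}
```

Because the same instance handles every message for a given call, there is no per-request rehydration of state, which is where the latency win comes from.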
Same with TTS: some providers, like Deepgram and ElevenLabs, let you stream the LLM text (or sentence-sized chunks) over their websocket API, making your voice AI bot really low-latency.
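A hedged sketch of the per-sentence chunking: split the LLM token stream at naive sentence boundaries so each sentence can be pushed to the TTS websocket the moment it completes, instead of waiting for the full reply. The boundary regex is deliberately simple and will mis-split abbreviations like "Dr.":

```typescript
// Accumulate streamed tokens and yield a chunk each time a sentence
// boundary (. ! ?) followed by whitespace is seen; flush the tail at the end.
function* sentenceChunks(tokens: Iterable<string>): Generator<string> {
  let buffer = "";
  for (const token of tokens) {
    buffer += token;
    let m: RegExpMatchArray | null;
    while ((m = buffer.match(/^([\s\S]*?[.!?])\s+([\s\S]*)$/))) {
      yield m[1];      // complete sentence: send to TTS now
      buffer = m[2];   // keep the remainder buffering
    }
  }
  if (buffer.trim()) yield buffer.trim();
}
```

Each yielded chunk would be written straight to the TTS provider's websocket, so audio synthesis overlaps with the LLM still generating.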
I developed a stack on Cloudflare Workers where latency is super low and it's cheap to run at scale thanks to Cloudflare's pricing.
It runs at around 50 cents per hour using AssemblyAI or Deepgram for STT, Gemini Flash as the LLM, and InWorld.ai for TTS (for me it's on par with ElevenLabs and super fast).
I am not using speech-to-speech APIs, but it would be easy to swap the STT + LLM + TTS pipeline for OpenAI's Realtime API (or the Gemini Live API, for that matter).
OpenAI Realtime voices are really bad though, so you can also configure your session to accept audio and output text, and then use any TTS provider (like ElevenLabs or InWorld.ai, my favorite for cost) to generate the audio.
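For example, a `session.update` payload along these lines switches the Realtime model to text-only output while still accepting audio in. The field names follow the OpenAI Realtime API as I understand it; verify them against the current docs before relying on this:

```typescript
// Audio in, text out: the model transcribes and reasons over your audio,
// but its responses come back as text for your own TTS to voice.
const sessionUpdate = {
  type: "session.update",
  session: {
    modalities: ["text"], // omit "audio" so the model never speaks
    input_audio_transcription: { model: "whisper-1" },
  },
};

// In a real client you would send this over the websocket:
// ws.send(JSON.stringify(sessionUpdate));
```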
Thanks for the feedback! I'm working on it now. Will push to GitHub soon with a basic Hono + D1 + Stripe setup you can actually run. I'll share it here when it's ready.
In-browser transcript beautification using a mix of small models (BERT, all-MiniLM-L6-v2, and T5) for restoring punctuation, finding chapter splits, and generating the headers.
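As a sketch of the chapter-split step: assuming one embedding per sentence (in the real app produced in-browser by all-MiniLM-L6-v2), you can place a chapter boundary wherever adjacent sentences drop below a similarity threshold. The threshold value and the tiny embeddings below are purely illustrative:

```typescript
// Cosine similarity between two equal-length embedding vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Indices where a new chapter starts (sentence 0 implicitly opens one).
function chapterBoundaries(embeddings: number[][], threshold = 0.5): number[] {
  const boundaries: number[] = [];
  for (let i = 1; i < embeddings.length; i++) {
    if (cosine(embeddings[i - 1], embeddings[i]) < threshold) boundaries.push(i);
  }
  return boundaries;
}
```

Each boundary then gets a header generated for the sentences it opens (T5 handles that part in my setup).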
Unless you fetch directly from your browser. It works by fetching the YouTube page JSON, which includes the captions track; from there you get the baseUrl to download the XML.
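Roughly like this. The `captionTracks` / `baseUrl` field names are what the watch page currently embeds in its player-response JSON, but that's not a stable API and can change at any time, and the regex is a simplification:

```typescript
// Pull the baseUrl of each caption track out of a YouTube watch-page HTML
// string. The page embeds a JSON blob containing "captionTracks":[...],
// and each entry's baseUrl points at the timed-text XML.
function captionBaseUrls(pageHtml: string): string[] {
  const match = pageHtml.match(/"captionTracks":(\[[\s\S]*?\])/);
  if (!match) return []; // no captions on this video
  const tracks: { baseUrl: string }[] = JSON.parse(match[1]);
  return tracks.map((t) => t.baseUrl);
}
```

Fetching that baseUrl from the browser itself is what avoids the server-side blocking mentioned above.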
I wrote this webapp that uses this method: it calls Gemini in the background to polish the raw transcript and produce a much better version with punctuation and paragraphs.
When you find something useful, you can share the overviews online (free hosting), e.g. https://voiceview.app/a/2J49UnwK
Hope this helps cut the noise and saves folks some time.
Laurent