
Good catch: the calculators here are bizarre. For GPT-4o, a 512x512 image uses 170 tile tokens; for GPT-4o mini, the same image uses 5,667 tile tokens. How does that even work in the context of a ViT? The patch size and the image encoder's output should be the same for both.

Since the base token counts increase proportionally as well (which makes even less sense), I have a hunch there's a JavaScript bug instead.



Confirmed that mini uses ~30x more tokens than base gpt-4o for the same image and same prompt: { completionTokens: 46, promptTokens: 14207, totalTokens: 14253 } vs. { completionTokens: 82, promptTokens: 465, totalTokens: 547 }.
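For context, OpenAI documents a tile-based formula for high-detail image inputs: resize to fit 2048x2048, shrink the shortest side to at most 768, then charge per 512x512 tile plus a base cost. A minimal sketch below, assuming the gpt-4o constants (170 per tile, 85 base) from the docs and the gpt-4o-mini constants (5,667 per tile, 2,833 base) quoted from the calculator; the ~33x multiplier reportedly keeps the dollar cost of an image roughly equal across the two models, which would explain the numbers without any ViT difference.

```python
import math

def image_tokens(width, height, tile_tokens=170, base_tokens=85):
    """Estimate prompt tokens for a high-detail image input.

    Follows OpenAI's documented tile formula; the tile/base constants
    for gpt-4o-mini are assumptions taken from the calculator above.
    """
    # 1. Scale down to fit within 2048x2048, preserving aspect ratio.
    if max(width, height) > 2048:
        scale = 2048 / max(width, height)
        width, height = int(width * scale), int(height * scale)
    # 2. Scale down so the shortest side is at most 768.
    if min(width, height) > 768:
        scale = 768 / min(width, height)
        width, height = int(width * scale), int(height * scale)
    # 3. Count 512x512 tiles and add the base cost.
    tiles = math.ceil(width / 512) * math.ceil(height / 512)
    return tiles * tile_tokens + base_tokens

# gpt-4o: a 512x512 image is one tile -> 170 + 85 = 255 tokens
print(image_tokens(512, 512))               # 255
# gpt-4o-mini (assumed constants) -> 5667 + 2833 = 8500 tokens,
# ~33x the gpt-4o count, in line with the ~30x seen via the API
print(image_tokens(512, 512, 5667, 2833))   # 8500
```

The ~30x observed above vs. the ~33x here would just be the non-image prompt tokens (text, message framing) diluting the ratio.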


Huh. I am so confused.




