
> Instead of a single "one batch with maybe up to 24hr latency" the offer could be a series of tiered queues with different SLAs and costs

The API includes a `completion_window` parameter ("The time frame within which the batch should be processed. Currently only 24h is supported.").

https://platform.openai.com/docs/api-reference/batch/create
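For example, a minimal sketch with the official Python SDK (the file name requests.jsonl is just a placeholder for an already-prepared batch input file):

  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  # Upload the JSONL file containing the requests to run in the batch
  batch_file = client.files.create(
      file=open("requests.jsonl", "rb"),
      purpose="batch",
  )

  # completion_window is required; per the docs, "24h" is the only value accepted today
  batch = client.batches.create(
      input_file_id=batch_file.id,
      endpoint="/v1/chat/completions",
      completion_window="24h",
  )
  print(batch.id, batch.status)

So tiered SLAs would presumably just mean accepting more values for that one parameter.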


