Python Async LLM API Orchestration Bug Fix
Upwork

Remote
• 2 hours ago
• No proposals
About
We're looking for an experienced developer to help debug and stabilize our current project. We're dealing with performance issues and bugs (rate limits, session timeouts, and event loop congestion) when making parallel async requests to a large-scale Python LLM API. Your mission: turn our unstable async setup into a rock-solid AI request engine that runs reliably without interruption.

Tech stack we're using:

- Python
- FastAPI
- asyncio
- Vector DBs
- Parallel LLM calls

We need someone who can dive into the code right away and move fast. If you've handled similar async challenges before, we'd love to talk.




