When using non-streaming mode in the text completion API, the process may exceed this time limit. To prevent time-out errors and avoid wasting tokens, you can try one of the following workarounds:

- Set `stream: true` in the request payload, as shown in LLM API | Basic Completions. This allows you to receive partial results as soon as the connection with the server is established, effectively bypassing the time-out issue. A request sketch follows this list.
- Route your requests through proxy.piapi.ai instead.
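To make the first workaround concrete, here is a minimal sketch of a streaming request in Python. The endpoint path, model name, header, chunk format, and the `PIAPI_API_KEY` environment variable are assumptions for illustration; the exact payload fields and URL are documented in LLM API | Basic Completions.

```python
import json
import os

import requests

# Hypothetical environment variable holding your API key.
API_KEY = os.environ["PIAPI_API_KEY"]

payload = {
    "model": "gpt-4o-mini",  # assumed model name; substitute your own
    "messages": [{"role": "user", "content": "Write a long story."}],
    "stream": True,  # enables streaming so partial results arrive early
}

# stream=True on the requests side keeps the connection open so the
# response body can be consumed chunk by chunk as the server sends it.
with requests.post(
    "https://api.piapi.ai/v1/chat/completions",  # assumed endpoint
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    stream=True,
    timeout=300,
) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if not line:
            continue
        # Assuming OpenAI-style server-sent events: each "data:" line
        # carries one JSON chunk, terminated by a literal [DONE].
        if line.startswith(b"data: "):
            data = line[len(b"data: "):]
            if data == b"[DONE]":
                break
            chunk = json.loads(data)
            delta = chunk["choices"][0]["delta"].get("content", "")
            print(delta, end="", flush=True)
```

Because output is consumed as it arrives, no single wait on the connection approaches the gateway's time limit, and any tokens already generated are preserved even if the request is interrupted.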