Integrate DeepSeek's first-generation reasoning model API, powered by the R1 (671B) model! Power your LLM tasks with PiAPI's most cost-efficient inference framework and infrastructure!
About our DeepSeek R1 API!
Pricing
Competitive Unit Pricing Based on Innovative Inference Infrastructure!
(1) The deepseek-chat model points to the DeepSeek-V3 model. The deepseek-reasoner model points to the new DeepSeek-R1 model.
(2) CoT (Chain of Thought) is the reasoning content that DeepSeek-Reasoner provides before outputting the final answer.
(3) If the max_tokens is not specified, then the default maximum output length is 4K. Please adjust max_tokens to support longer outputs.
(4) The output token count of DeepSeek-Reasoner includes all tokens from CoT and the final answer, and they are priced equally.
(5) For the detailed advanced API documentation, please refer to https://api-docs.deepseek.com/guides/reasoning_model.
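To illustrate note (4), a deepseek-reasoner response carries the CoT and the final answer in separate fields. The field names below (`reasoning_content`, `content`) follow the DeepSeek reasoning-model documentation linked above; the sample response body itself is fabricated for demonstration:

```python
import json

# A trimmed, made-up response body in the shape documented for
# deepseek-reasoner: the CoT arrives in `reasoning_content`, the final
# answer in `content`, and both are billed as output tokens.
sample_response = json.dumps({
    "choices": [{
        "message": {
            "reasoning_content": "First, compare the two numbers digit by digit...",
            "content": "9.8 is larger than 9.11."
        }
    }],
    "usage": {"completion_tokens": 320}  # includes CoT + final answer
})

def split_reasoning(raw: str) -> tuple[str, str]:
    """Return (chain_of_thought, final_answer) from a response body."""
    message = json.loads(raw)["choices"][0]["message"]
    return message.get("reasoning_content", ""), message["content"]

cot, answer = split_reasoning(sample_response)
```

Keeping the two fields separate lets you display or discard the CoT independently of the answer your application actually uses.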
Our DeepSeek API is provided on PiAPI's custom AI inference framework and infrastructure, which allows developers to integrate the advanced, cost-effective chat and reasoning capabilities of V3 and R1 into their own apps or platforms!
The DeepSeek API is created for developers who want to incorporate state-of-the-art language and reasoning capabilities into their generative AI applications. It is ideal for AI-powered coding assistants, literature review, documentation summarization, translation, and marketing and advertising applications.
After registering for an account on PiAPI, you will get some free credits to try the API. Using your own API key, you can start making HTTPS calls to the API!
You can call our API using HTTPS POST and GET methods from within your application. Any programming language that supports HTTP requests (e.g. Python, JavaScript, Ruby, Java) can be used to make the call!
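As a minimal sketch, here is what such a call might look like in Python using only the standard library. The endpoint path, authorization header scheme, and model identifier below are assumptions based on common OpenAI-style chat APIs; consult the API documentation for the exact values:

```python
import json
import urllib.request

API_KEY = "your-api-key-here"  # from your PiAPI workspace
# Hypothetical endpoint path -- check the docs for the real one.
ENDPOINT = "https://api.piapi.ai/v1/chat/completions"

payload = {
    "model": "deepseek-reasoner",  # or "deepseek-chat" for V3
    "messages": [{"role": "user", "content": "Summarize the benefits of CoT reasoning."}],
    "max_tokens": 8192,  # raise above the 4K default for longer outputs
}

request = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        # Bearer-token auth is an assumption; see the docs.
        "Authorization": f"Bearer {API_KEY}",
    },
    method="POST",
)

# Send the request (uncomment once you have a real key):
# with urllib.request.urlopen(request) as resp:
#     print(json.load(resp))
```

The same payload works from any HTTP client; only the request-building syntax changes between languages.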
If the number of your concurrent jobs exceeds a certain threshold, additional jobs will be queued. In terms of total number of requests, you can make as many requests as your credit amount allows.
Our API returns error codes and messages in the HTTP response to help identify the issue. Please refer to our documentation for more details.
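The specific error codes are defined in our documentation; as a generic sketch, a client can branch on standard HTTP status classes, retrying transient errors (429, 5xx) with exponential backoff and surfacing client errors (4xx) immediately. This status-to-action mapping is a common convention, not PiAPI's documented behavior:

```python
def should_retry(status: int) -> bool:
    """Standard-HTTP heuristic: retry rate limits and server errors,
    fail fast on client errors such as 401 (bad key) or 400 (bad payload)."""
    return status == 429 or 500 <= status < 600

def backoff_seconds(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    """Exponential backoff: 1s, 2s, 4s, ... capped at `cap` seconds."""
    return min(cap, base * (2 ** attempt))
```

Pairing a check like `should_retry` with a capped backoff keeps retries polite toward the queueing behavior described above.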
Yes, absolutely! We provide custom solutions for clients with specialized requirements (e.g. low latency, higher concurrency, fine-tuned DeepSeek models), and we offer cost-effective, performance-enhanced solutions for these LLM use cases!
The DeepSeek models are released under a custom open-source license that permits commercial use for any lawful purpose. Developers do not need to register or apply with DeepSeek before using the open-source models, and they can also build derivative models and product applications based on the models.
We offer the API through a pay-as-you-use system: you can purchase credits in our Workspace and monitor your remaining balance there. The per-use cost of the API is shown in the upper portion of this page. Please note that purchased credits expire 180 days after purchase.
We have integrated Stripe in our payment system, which will allow payments to be made from most major credit card providers.
No, we do not offer refunds. However, when you first sign up for an account on PiAPI's Workspace, you receive free credits to try our service before making any payment!
Please email us at contact@piapi.ai - we'd love to listen to your feedback and explore potential collaborations!