Make Instruction: How to Use PiAPI to Build a Workflow on Make?
Introduction#
Hey there! To help more content creators easily use PiAPI in their projects, Flux by PiAPI has officially launched on the Make platform! If you're a content creator, or anyone without a coding background who finds API concepts challenging, this guide uses visual walkthroughs to help you overcome those challenges. 🙋

Here's what we'll cover, step by step:

1. Finding PiAPI's Flux Modules on the Make Platform
2. Calling APIs on the PiAPI Platform
3. Understanding PiAPI's Endpoint Structure
4. Using the Flux App (PiAPI on Make) and Integrating It into Your Workflow
5. Adjusting Module Parameters and Their Meanings: Flux Module Selection and Parameter Configuration Guide
6. Building a Simple Workflow Using PiAPI on Make
💡 If you have a technical background, you can quickly grasp Make's functionality and start building workflows to experiment with creative ideas. 🙌
All in all, we'd be delighted to see you share your workflow output with us on any social media, and to have you join our Discord for discussion!

Finding PiAPI's Flux Modules on the Make Platform#
Log in to the Make platform, locate Scenarios in the left dashboard, click Create a new scenario, and you will enter the main workflow builder interface.
Then search for "PiAPI/Flux" and you'll see the Flux API modules available for use.

Now that you've learned how to find PiAPI's Flux modules on Make, let's explore PiAPI's endpoint structure and walk through the process of calling Flux APIs. This knowledge will help you:

- Better understand the architecture behind each module
- Confidently configure parameters when building workflows
- Maximize the potential of PiAPI's capabilities in Make

We'll demonstrate the API calling process step by step, giving you practical insights to apply directly in your workflow development.

Understanding PiAPI's Endpoint Structure#
Before organizing your workflow automation, let's walk through the key concepts needed for successful API calls with PiAPI.

Core Concepts#
You should familiarize yourself with:

- API structure and its components
- The purpose of each structural element
- Endpoint types and their functions
- Parameter definitions and requirements
- Best practices for executing flawless API calls

This foundation will ensure you can effectively work with PiAPI's endpoints when building your workflows. An endpoint in PiAPI consists of three core components:

1. URL#
The service path that directs your request. Key requirement: you must use the correct address to successfully reach PiAPI. Two HTTP methods are used with PiAPI endpoints:

POST - used when you need to create a task.
GET - used when retrieving a task's response.

These methods (POST and GET) define your action type when making API calls, and the URLs differ between the two HTTP methods.
If you want to create a task, choose POST; this is how you send your request to the Flux API. If you want to fetch the response of a specific task you have initiated (especially when you need to check or reuse an output in your workflow), choose GET; you only need to send the URL and header params.
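For readers with a technical background, the two call types above can be sketched in Python using only the standard library. This is a minimal sketch, not PiAPI's official client: the base URL is an assumption (check your PiAPI dashboard), the `/v1/task` paths follow this guide's GET setup steps, and the helper names and sample body are illustrative.

```python
# Minimal sketch of the POST (create task) and GET (fetch response) calls.
# BASE_URL is an assumption; API_KEY is a placeholder for your real X-API-Key.
import json
import urllib.request

BASE_URL = "https://api.piapi.ai"   # assumed base URL
API_KEY = "your-x-api-key-here"     # placeholder, not a real key

def build_create_request(body: dict) -> urllib.request.Request:
    """POST: create a task. The body carries model, task_type and input."""
    return urllib.request.Request(
        url=f"{BASE_URL}/v1/task",
        data=json.dumps(body).encode("utf-8"),
        headers={"X-API-Key": API_KEY, "Content-Type": "application/json"},
        method="POST",
    )

def build_fetch_request(task_id: str) -> urllib.request.Request:
    """GET: retrieve the response of a task you already created."""
    return urllib.request.Request(
        url=f"{BASE_URL}/v1/task/{task_id}",
        headers={"X-API-Key": API_KEY},
        method="GET",
    )

# Sending either request is one extra step, e.g.:
# with urllib.request.urlopen(build_fetch_request("12345")) as resp:
#     result = json.load(resp)
```

Note how the GET request needs only the URL and the header params, while the POST request also carries a JSON body.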
2. Header Params#
Usually includes your X-API-Key.
3. Body Params#
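As a hedged illustration of this structure: a Flux task body nests three elements, the model, the task_type, and an input object holding the task-specific params. The model name and task_type value below are examples only; take the exact strings from PiAPI's Flux documentation.

```json
{
  "model": "Qubico/flux1-dev",
  "task_type": "txt2img",
  "input": {
    "prompt": "A lovely puppy",
    "guidance_scale": 2.5
  }
}
```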
Tip: The input params mainly depend on the model and the task_type. See the section Flux Module Selection and Parameter Configuration Guide for the params that can be included for each task_type, based on your needs.

Now, let's explore API calls on PiAPI to deepen your understanding of Make modules.

How to Make an API Call in PiAPI#
In PiAPI, you have two ways to call Flux APIs (this applies to other APIs as well):

1. Using the Playground: fill in the required parameters for direct execution.
2. Via Run API: submit your API call, then retrieve the task results from the respective model's Task History section.

When using the Run API to call PiAPI platform APIs, you'll have a broader range of options. As mentioned earlier regarding API call structures, you can:

1. Refer to the parameter specifications below for task_type selection
2. Configure input parameters according to your requirements
Once you understand the structure of every API request, you can create an API call on your own. Not that difficult, right? 💪
Now that you've understood how to create an API call on PiAPI, let's learn how to make one in Make.

How to make an API call in Make?#
Which module you set up depends on the HTTP method: whether you want to create a task (POST) or get your task response (GET).

Set up a POST Module#
1. Connect your PiAPI account. Make sure you're logged in first.
2. Create a JSON module (this is where you type your settings). A JSON module usually contains the structure of an endpoint's body, including model, task_type and input. In this step, you need to construct the data structure for the endpoint on your own.
Set up a GET request#
1. Connect your PiAPI account. Ensure you're properly logged in.
2. Choose "Make an API Call" from the available options.
3. URL: enter /v1/task/ followed by your specific task_id (example: /v1/task/12345).
4. Execute to retrieve your task results.
Flux Module Selection and Parameter Configuration Guide#
What is LoRA & Accessing LoRA Options#
Simply put, LoRA is a style filter for AI art. To choose a LoRA, check the available styles in PiAPI's Flux documentation; that page lists the LoRA styles you can pick from.

What is ControlNet & Accessing ControlNet Options#
ControlNet is a neural network structure designed to precisely control AI image generation by incorporating additional input conditions (e.g., edge maps, depth maps, human poses). It acts as a "steering wheel" for models like Stable Diffusion, ensuring generated images strictly adhere to structural constraints while following text prompts.

Congratulations! Now that you understand both LoRA and ControlNet and know where to find the type lists, you're ready to explore beyond the PiAPI platform.
On Make, you can:

- Select different task_type options to experiment with various LoRA and ControlNet combinations
- Customize these features based on your specific needs

Understanding this structure will empower you to generate higher-quality images! Next, I'll walk you through each module's parameters. You'll learn to:

- Configure parameters effectively
- Make optimized Flux API calls
Module Types and Param Configuration Setting#
Based on these instructions, you can understand the purpose of each module's parameters and adjust them as needed until they meet your requirements.

Extend an Image#
Expands the boundaries of an image. The user uploads an image, and the system generates new background or scenery based on the original content. Suitable for extending the image's view into a broader scene.

| Parameter Name | Description | Constraints & Details |
|---|---|---|
| image | Image URL | URL format |
| prompt | Content to show in the extended part | string |
| outpaint_left | Expand canvas pixels to the left | Total delta pixel size < 1024×1024 |
| outpaint_right | Expand canvas pixels to the right | Total delta pixel size < 1024×1024 |
| outpaint_top | Expand canvas pixels to the top | Total delta pixel size < 1024×1024 |
| outpaint_bottom | Expand canvas pixels to the bottom | Total delta pixel size < 1024×1024 |
| denoise | Controls noise/artifact removal strength in AI-generated images | Range: 0.1 to 1.0 |
| guidance_scale | Adjusts adherence between generated content and text prompt | Range: 1.5 to 5.0 |
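As a sketch, the body you would type into the JSON module for this task could look like the following. The parameter names come from the table above; the image URL and prompt are made up, and the model and task_type strings are placeholders, so replace them with the exact identifiers from PiAPI's Flux documentation.

```json
{
  "model": "Qubico/flux1-dev",
  "task_type": "outpaint",
  "input": {
    "image": "https://example.com/source.jpg",
    "prompt": "a wide mountain landscape at sunset",
    "outpaint_left": 256,
    "outpaint_right": 256,
    "outpaint_top": 0,
    "outpaint_bottom": 0,
    "denoise": 0.8,
    "guidance_scale": 2.5
  }
}
```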
Generate an Image from an Image#
Generates a new image based on an input image. The user uploads an existing image, and the system modifies or transforms it to generate a new version.

| Parameter Name | Description | Constraints & Details |
|---|---|---|
| image | Image URL | URL format |
| prompt | Description of the desired output image | string. Example: "A lovely puppy" |
| negative_prompt | Elements to avoid during generation | string |
| denoise | Proportion of the input image retained (controls noise removal) | float (0.1–1.0) |
| guidance_scale | Controls adherence between generated content and text prompt | float (1.5–5.0) |
Generate an Image from an Image with LoRA#
Generates a modified image based on the input image and a LoRA model.

| Parameter | Description | Constraints & Details |
|---|---|---|
| image | Image URL | URL format |
| prompt | Desired image description | string. Example: "A lovely puppy" |
| lora_type | Art style selection | Choose from PiAPI's available LoRA options |
| lora_strength | LoRA style intensity | Range: 0.1 (faint) to 1.0 (strong) |
| guidance_scale | Prompt adherence level | Range: 1.5 (creative) to 5.0 (strict) |
| negative_prompt | Elements to exclude | Optional. Example: "blurry, distorted hands" |
| width / height | Image dimensions | In pixels (e.g., 1024×768). Max: 2048×2048 |
Generate an Image from Text#
Creates an image based on a text description.

| Parameter Name | Description | Constraints & Details |
|---|---|---|
| batch_size | Batch image generation count | Integer. Range: 1 to 4. Default: 1 |
| image | Source image URL | String (URL format) |
| prompt | Description of desired image content | string. Example: "A lovely puppy" |
| negative_prompt | Elements to avoid in generation | Optional |
| guidance_scale | Controls prompt adherence strength | Range: 1.5 to 5.0 |
Generate Image with Text based on Lora#
Creates an image from a text description, using LoRA for fine-tuned generation.

| Parameter | Description | Constraints & Details |
|---|---|---|
| prompt | Describe your desired image | string. Example: "A lovely puppy" |
| lora_type | Select an art style | Available PiAPI LoRA options |
| lora_strength | LoRA style effect intensity | Default: 1.0 (maximum effect). Range: 0.1 (faint) to 1.0 (strong) |
| guidance_scale | AI prompt adherence level | Range: 1.5 (random) to 5.0 (strict) |
| negative_prompt | Elements to exclude | Optional. Example: "blurry, distorted hands" |
| width / height | Output image dimensions | In pixels (e.g., 1024×768) |
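Putting the table above together, here is a hedged sketch of a request body for this module. The parameter names are taken from the table; the model and task_type strings are illustrative guesses, and the "anime" LoRA is a hypothetical choice, so confirm the exact values in PiAPI's Flux documentation.

```json
{
  "model": "Qubico/flux1-dev",
  "task_type": "txt2img-lora",
  "input": {
    "prompt": "A lovely puppy",
    "negative_prompt": "blurry, distorted hands",
    "lora_type": "anime",
    "lora_strength": 0.8,
    "guidance_scale": 2.5,
    "width": 1024,
    "height": 1024
  }
}
```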
Generate an Image with ControlNet and LoRA#
Generates an image using both ControlNet and LoRA.

| Parameter | Description | Constraints & Details |
|---|---|---|
| image | Source image URL | URL format |
| prompt | Desired image description | string. Example: "A lovely puppy" |
| control_type | ControlNet type | depth (default), soft_edge, canny, openpose |
| control_strength | ControlNet effect intensity | Range: 0.0 to 5.0 |
| lora_type | Art style selection | Select from PiAPI's LoRA options |
| lora_strength | LoRA style intensity | Range: 0.1 (faint) to 1.0 (strong) |
| denoise | Proportion of the input image retained | Range: 0.1 to 1.0 |
| guidance_scale | Prompt adherence level | Range: 0.0 (random) to 5.0 (strict) |
Generate a Variation of an Image#
Creates a new variation of an image based on the original input.

| Parameter | Description | Constraints & Details |
|---|---|---|
| batch_size | Batch image generation count | Default: 1. Range: 1 to 4 |
| image | Source image URL | Valid URL |
| prompt | Description of the desired variation | string. Example: "A lovely puppy" |
| width | Image width in pixels | Recommended: ≤ 2048 |
| height | Image height in pixels | Recommended: ≤ 2048 |
| denoise | Noise removal strength | Range: 0.1 to 1.0 |
| guidance_scale | Prompt adherence strength | Range: 0.0 to 5.0 |
Remove the Background from an Image#
Removes the background from an image, leaving a transparent or solid-color background.

| Parameter | Type | Description | Example |
|---|---|---|---|
| image | string | Source image URL | https://example.com/image.jpg |
Restore an Image#
Restores a damaged image. The user uploads an image with some parts missing or damaged, and the system fills in these gaps based on the user's description.

| Parameter | Description | Constraints & Details |
|---|---|---|
| batch_size | Batch image generation count | Default: 1. Range: 1 to 4 |
| image | Source image URL | Valid URL format |
| prompt | Content to fill into the restored parts | string. Example: "A lovely puppy" |
| denoise | Noise/artifact removal strength | Range: 0.1 to 1.0 |
| guidance_scale | Prompt adherence strength | Default: 2.5. Range: 0.0 to 5.0 |
| service_mode | Payment method selection | ppu (pay-per-use), byoa (bring-your-own-account) |
How to Build a Simple Workflow Using PiAPI on Make#
In this section, I'll show you how to build a simple workflow with Google Sheets and the Flux API.
First, input your parameters (such as prompt and lora_strength) into a Google Sheet, based on your selected module type. Next, choose a Google Sheets trigger node. In this case, I've selected "Watch New Rows", which means the workflow will only execute when new row data is added. You may alternatively select other Google Sheets trigger nodes to suit different needs. The "Limit" setting determines how many rows are fetched per execution; I've configured it to retrieve only one row at a time.
Add a Flux node and input all required parameters.
The final step involves configuring a node to capture your output results. You can either check the small bubble notification in the upper-right corner after each workflow execution, or implement a more structured approach. For this example, I've selected "Update a Row" to automatically store the target image's URL at the end of the input row upon workflow completion. This appends results directly to your Google Sheet, giving you seamless traceability.
Click Run Once, and you can check the URL feedback.
Finally, you can fetch your output here. The workflow has been constructed successfully.

Join Our Discord#
Join our Discord community to showcase your work or get troubleshooting help.
Our contact details are available on the homepage.

We are looking forward to friendly and constructive discussions.
Plus, we would be so glad to see your creative workflows 🙌!

Modified at 2025-04-02 09:24:58