
🚀 Introduction

The Text Generator API lets you run text generation with Gemini models. It supports three modes:
  • Sync mode (sync: true): The request blocks until generation completes and returns the result directly.
  • Async mode (default): The request returns immediately with a pending status. Poll or receive a webhook when the result is ready.
  • Streaming mode (SSE): Tokens are streamed back in real time via Server-Sent Events.

✅ Prerequisites

  • A Creatify account with API access
  • Your API credentials (see Authentication)

Supported Models

  • gemini-2.5-flash
  • gemini-2.5-pro
  • gemini-3-flash-preview
  • gemini-3.1-pro-preview
  • gemini-3.1-flash-lite-preview

1) Create a Text Generation Task

Send model_name, messages, and optional parameters like system_instruction, config, and webhook_url.
Endpoint: POST /sse/text_generator/
Body:
  • model_name (string, required) — Name of the Gemini model (e.g. gemini-2.5-flash)
  • messages (array, required) — List of message objects with role (user or model) and content
  • system_instruction (string, optional) — System instruction for the model
  • config (object, optional) — Generation config parameters (temperature, max_output_tokens, etc.)
  • webhook_url (string URL, optional) — Webhook URL for async status updates
  • sync (boolean, optional, default: false) — If true, the request blocks until generation completes and returns the result directly
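For reference, the same body can be assembled in Python as a plain dict before serializing it for the POST. This is a minimal sketch based on the field list above; the role check is a local convenience, not something the API enforces on the client side:

```python
import json

VALID_ROLES = {"user", "model"}

def build_payload(model_name, messages, **optional):
    """Assemble a request body for POST /sse/text_generator/.

    `optional` carries the optional fields documented above
    (system_instruction, config, webhook_url, sync).
    """
    for msg in messages:
        if msg.get("role") not in VALID_ROLES:
            raise ValueError(f"role must be 'user' or 'model', got {msg.get('role')!r}")
    return {"model_name": model_name, "messages": messages, **optional}

payload = build_payload(
    "gemini-2.5-flash",
    [{"role": "user", "content": "Write a short tagline for an AI video tool."}],
    system_instruction="You are a creative marketing copywriter.",
    config={"temperature": 0.8, "max_output_tokens": 256},
    webhook_url="https://webhook.site/your-webhook-id",
)
body = json.dumps(payload)
```

Omitting sync leaves the request in async mode, the default.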

Async Mode (default)

curl --request POST \
  --url https://api.creatify.ai/sse/text_generator/ \
  --header 'Content-Type: application/json' \
  --header 'X-API-ID: your-api-id' \
  --header 'X-API-KEY: your-api-key' \
  --data '{
    "model_name": "gemini-2.5-flash",
    "messages": [
        {"role": "user", "content": "Write a short tagline for an AI video tool."}
    ],
    "system_instruction": "You are a creative marketing copywriter.",
    "config": {
        "temperature": 0.8,
        "max_output_tokens": 256
    },
    "webhook_url": "https://webhook.site/your-webhook-id"
}'

Sync Mode

Set "sync": true to block until generation completes. The response will contain the final result directly — no polling or webhook needed.
curl --request POST \
  --url https://api.creatify.ai/sse/text_generator/ \
  --header 'Content-Type: application/json' \
  --header 'X-API-ID: your-api-id' \
  --header 'X-API-KEY: your-api-key' \
  --data '{
    "model_name": "gemini-2.5-flash",
    "messages": [
        {"role": "user", "content": "Write a short tagline for an AI video tool."}
    ],
    "system_instruction": "You are a creative marketing copywriter.",
    "config": {
        "temperature": 0.8,
        "max_output_tokens": 256
    },
    "sync": true
}'

Config Parameters

Parameter | Type | Description
temperature | float | Controls randomness. Range: 0.0 – 2.0. Lower = more deterministic.
max_output_tokens | int | Maximum number of tokens to generate.
top_p | float | Nucleus sampling threshold. Range: 0.0 – 1.0.
top_k | int | Top-k sampling. Only sample from the top k tokens.
presence_penalty | float | Penalize tokens already present. Range: -2.0 – 2.0.
frequency_penalty | float | Penalize tokens by frequency. Range: -2.0 – 2.0.
seed | int | Random seed for deterministic generation.
stop_sequences | string[] | List of strings that stop generation when encountered.
response_mime_type | string | text/plain or application/json.
response_json_schema | object | JSON Schema for structured output. Automatically sets response_mime_type to application/json.
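As an illustration, a config requesting structured JSON output might look like the sketch below. The schema fields (tagline, tone) are made up for the example, and check_ranges is a local sanity helper mirroring the documented ranges, not part of the API:

```python
config = {
    "temperature": 0.4,        # documented range: 0.0 – 2.0
    "max_output_tokens": 512,
    "top_p": 0.95,             # documented range: 0.0 – 1.0
    "stop_sequences": ["END"],
    # Providing a JSON Schema switches the response to
    # application/json automatically, per the table above.
    "response_json_schema": {
        "type": "object",
        "properties": {
            "tagline": {"type": "string"},
            "tone": {"type": "string"},
        },
        "required": ["tagline"],
    },
}

def check_ranges(cfg):
    """Client-side sanity check against the documented ranges."""
    ok = 0.0 <= cfg.get("temperature", 1.0) <= 2.0
    ok = ok and 0.0 <= cfg.get("top_p", 1.0) <= 1.0
    for key in ("presence_penalty", "frequency_penalty"):
        ok = ok and -2.0 <= cfg.get(key, 0.0) <= 2.0
    return ok
```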

Multimodal Input (Image / Video)

Messages support multimodal content — send images or videos alongside text by providing a list of content parts. Each part specifies a type (text, image, or video) with the corresponding data. Limits: Images up to 20 MB, videos up to 100 MB.
Multimodal requests — especially with video — can take significantly longer to process. We recommend using async mode (default) with polling or a webhook, rather than sync mode.
Supported MIME types:
  • Image: image/jpeg, image/png, image/gif, image/webp
  • Video: video/mp4, video/mpeg, video/mov, video/avi, video/webm, video/quicktime, and more
curl --request POST \
  --url https://api.creatify.ai/sse/text_generator/ \
  --header 'Content-Type: application/json' \
  --header 'X-API-ID: your-api-id' \
  --header 'X-API-KEY: your-api-key' \
  --data '{
    "model_name": "gemini-2.5-flash",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "image", "data": "<base64-encoded-image>", "mime_type": "image/jpeg"},
                {"type": "text", "text": "Describe what you see in this image."}
            ]
        }
    ],
    "webhook_url": "https://webhook.site/your-webhook-id"
}'
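Building the image part in code comes down to base64-encoding the raw bytes and attaching the MIME type. A minimal sketch, with the 20 MB image limit from above enforced locally (the placeholder bytes stand in for a real JPEG):

```python
import base64

MAX_IMAGE_BYTES = 20 * 1024 * 1024  # documented 20 MB image limit

def image_part(raw_bytes, mime_type="image/jpeg"):
    """Build a multimodal content part from raw image bytes."""
    if len(raw_bytes) > MAX_IMAGE_BYTES:
        raise ValueError("image exceeds the 20 MB limit")
    return {
        "type": "image",
        "data": base64.b64encode(raw_bytes).decode("ascii"),
        "mime_type": mime_type,
    }

content = [
    # Placeholder bytes for illustration; read a real file in practice.
    image_part(b"\xff\xd8\xff\xe0fake-jpeg-bytes"),
    {"type": "text", "text": "Describe what you see in this image."},
]
```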

Function Calling

Enable the model to call functions you define. Provide tools with function declarations and optionally configure calling behavior with tool_config.
curl --request POST \
  --url https://api.creatify.ai/sse/text_generator/ \
  --header 'Content-Type: application/json' \
  --header 'X-API-ID: your-api-id' \
  --header 'X-API-KEY: your-api-key' \
  --data '{
    "model_name": "gemini-2.5-flash",
    "messages": [
        {"role": "user", "content": "What is the weather in San Francisco?"}
    ],
    "tools": [
        {
            "function_declarations": [
                {
                    "name": "get_weather",
                    "description": "Get the current weather for a given location.",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "location": {"type": "string", "description": "City name"}
                        },
                        "required": ["location"]
                    }
                }
            ]
        }
    ],
    "tool_config": {
        "function_calling_config": {
            "mode": "AUTO"
        }
    },
    "sync": true
}'
After receiving a function call, you can send the result back in a follow-up message:
curl --request POST \
  --url https://api.creatify.ai/sse/text_generator/ \
  --header 'Content-Type: application/json' \
  --header 'X-API-ID: your-api-id' \
  --header 'X-API-KEY: your-api-key' \
  --data '{
    "model_name": "gemini-2.5-flash",
    "messages": [
        {"role": "user", "content": "What is the weather in San Francisco?"},
        {"role": "model", "function_call": {"name": "get_weather", "args": {"location": "San Francisco"}}},
        {"role": "user", "function_response": {"name": "get_weather", "response": {"temperature": 62, "condition": "Foggy"}}}
    ],
    "tools": [
        {
            "function_declarations": [
                {
                    "name": "get_weather",
                    "description": "Get the current weather for a given location.",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "location": {"type": "string", "description": "City name"}
                        },
                        "required": ["location"]
                    }
                }
            ]
        }
    ],
    "sync": true
}'
Function calling modes:
Mode | Description
AUTO | Model decides whether to call a function or respond with text (default)
ANY | Model must call one of the provided functions
NONE | Model will not call any functions
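The round trip above — receive a function call, run it locally, append the result as a function_response message — can be sketched like this. The get_weather handler is the illustrative function from the declarations above, not a real weather lookup:

```python
# Map function names to local handlers; get_weather is the
# illustrative declaration from the examples above.
HANDLERS = {
    "get_weather": lambda args: {"temperature": 62, "condition": "Foggy"},
}

def run_function_calls(messages, function_calls):
    """Execute each returned function call locally and append the
    model's call plus our function_response to the conversation,
    matching the follow-up message shape shown above."""
    for call in function_calls:
        result = HANDLERS[call["name"]](call.get("args", {}))
        messages.append({"role": "model", "function_call": call})
        messages.append({
            "role": "user",
            "function_response": {"name": call["name"], "response": result},
        })
    return messages

messages = [{"role": "user", "content": "What is the weather in San Francisco?"}]
calls = [{"name": "get_weather", "args": {"location": "San Francisco"}}]
messages = run_function_calls(messages, calls)
```

The resulting messages list is what the second curl example above sends back, along with the same tools array.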

2) Streaming Mode (SSE)

For real-time token streaming, use the dedicated SSE endpoint. Tokens are delivered as they are generated — no polling needed.
Endpoint: POST /sse/text_generator/stream/
The request body is the same as the standard endpoint (minus sync and webhook_url). The response is an SSE stream (text/event-stream) of Gemini-native response chunks. The final chunk includes a creatify object with the generation id and credits_used.
curl --request POST \
  --url https://api.creatify.ai/sse/text_generator/stream/ \
  --header 'Content-Type: application/json' \
  --header 'X-API-ID: your-api-id' \
  --header 'X-API-KEY: your-api-key' \
  --data '{
    "model_name": "gemini-2.5-flash",
    "messages": [
        {"role": "user", "content": "Write a short tagline for an AI video tool."}
    ],
    "config": {
        "temperature": 0.8,
        "max_output_tokens": 256
    }
}'
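Consuming the stream means reading data: lines from the text/event-stream body and decoding each as JSON. A minimal parser sketch; the sample chunks below are illustrative, since the real chunks are Gemini-native response objects and only the final creatify object is documented above:

```python
import json

def parse_sse(stream_text):
    """Extract the JSON payload of each `data:` line from a
    text/event-stream body."""
    chunks = []
    for line in stream_text.splitlines():
        if line.startswith("data:"):
            data = line[len("data:"):].strip()
            if data:
                chunks.append(json.loads(data))
    return chunks

# Illustrative stream; real chunk shapes come from the Gemini response.
sample = (
    'data: {"text": "Create stunning videos"}\n\n'
    'data: {"text": " in seconds."}\n\n'
    'data: {"creatify": {"id": "a1b2c3d4", "credits_used": 0.0005}}\n\n'
)
chunks = parse_sse(sample)
text = "".join(c.get("text", "") for c in chunks)
```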
The streaming endpoint uses the same authentication (X-API-KEY + X-API-ID) and credit system as the standard endpoint.

3) Check Status (Poll) or Receive Webhook (Async mode only)

When using async mode (default), you can poll the task until status is done, or provide a webhook to be notified automatically. The generated text is in the response_text field.
If you used "sync": true, skip this step — the response already contains the completed result.
Endpoint: GET /sse/text_generator/{id}/

Poll

curl --request GET \
  --url https://api.creatify.ai/sse/text_generator/a1b2c3d4-5678-90ab-cdef-1234567890ab/ \
  --header 'X-API-ID: your-api-id' \
  --header 'X-API-KEY: your-api-key'
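A polling loop only needs to repeat that GET until the status reaches done or failed. A transport-agnostic sketch — fetch is any callable that performs the authenticated GET and returns the parsed JSON body, stubbed here for illustration:

```python
import time

def poll_task(fetch, task_id, interval=2.0, timeout=120.0):
    """Poll GET /sse/text_generator/{id}/ until the task finishes.

    `fetch` is injected so the loop stays independent of the HTTP
    client you use (and testable without the network).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        task = fetch(task_id)
        if task["status"] in ("done", "failed"):
            return task
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} still pending after {timeout}s")

# Stubbed fetch: pending on the first poll, done on the second.
responses = iter([
    {"status": "pending"},
    {"status": "done", "response_text": "Create stunning videos in seconds."},
])
result = poll_task(lambda _id: next(responses), "a1b2c3d4", interval=0.0)
```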

Webhook (Optional)

If you supplied a webhook_url when creating the task, we’ll POST a payload when it finishes. The generated text is in the response_text field.
{
    "id": "a1b2c3d4-5678-90ab-cdef-1234567890ab",
    "status": "done",
    "response_text": "Create stunning videos in seconds — powered by AI.",
    "response_function_calls": [],
    "failed_reason": "",
    "credits_used": 0.0005
}
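On the receiving side, a webhook handler only needs to parse that payload and branch on status. A minimal sketch against the payload shape shown above (the raw body below is just the documented example re-serialized):

```python
import json

def handle_webhook(raw_body):
    """Parse a webhook payload and return the generated text,
    raising on failure with the documented failed_reason."""
    payload = json.loads(raw_body)
    if payload["status"] == "done":
        return payload["response_text"]
    raise RuntimeError(payload.get("failed_reason") or "generation failed")

raw = json.dumps({
    "id": "a1b2c3d4-5678-90ab-cdef-1234567890ab",
    "status": "done",
    "response_text": "Create stunning videos in seconds — powered by AI.",
    "response_function_calls": [],
    "failed_reason": "",
    "credits_used": 0.0005,
})
text = handle_webhook(raw)
```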
You can verify the task any time with a GET to /sse/text_generator/{id}/.

Status Values

Status | Description
pending | Task created, waiting to be processed
running | Model is generating text
done | Generation complete — check response_text
failed | Generation failed — check failed_reason

📚 Endpoint Reference

Action | Endpoint
Create a text generation task | POST /sse/text_generator/
Create a text generation task (streaming) | POST /sse/text_generator/stream/
List text generation tasks | GET /sse/text_generator/
Get task status / result | GET /sse/text_generator/{id}/

🎯 Summary

Step | What you do
1 | Create a text generation task with model_name + messages (+ optional config, sync, webhook_url)
2a | Sync mode (sync: true): The result is returned directly in the response — done!
2b | Async mode (default): Poll /sse/text_generator/{id}/ or receive a webhook with the final result in response_text
2c | Streaming mode: POST to /sse/text_generator/stream/ and consume the SSE stream in real time