Rate Limits

The Flow API uses rate limiting to ensure fair usage and maintain service quality.


💡 Optimization Tip

Consider using batch operations for multiple items to reduce API calls and stay within rate limits.

Default Limits

  • General API: 100 requests per minute per API key
  • Post Creation: 30 requests per minute
  • Webhook Creation: 10 requests per minute
  • API Key Creation: 5 requests per minute
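
Because all of the limits above are per-minute windows, a safe client-side pacing interval can be derived directly from them. The helper below is an illustrative sketch (the name paceInterval is not part of any SDK):

```typescript
// Spread requests evenly across the one-minute window instead of bursting.
function paceInterval(limitPerMinute: number): number {
  // 60,000 ms per window divided by the allowed request count
  return Math.ceil(60_000 / limitPerMinute);
}

const generalDelayMs = paceInterval(100); // general API: one request every 600 ms
const postDelayMs = paceInterval(30);     // post creation: one every 2,000 ms
```

Pacing proactively like this avoids bursts that consume the whole window at once and then stall.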

Rate Limit Headers

Every API response includes rate limit information:

X-RateLimit-Limit: 100
X-RateLimit-Remaining: 95
X-RateLimit-Reset: 1703123456
  • X-RateLimit-Limit: Maximum requests allowed in the current window
  • X-RateLimit-Remaining: Number of requests remaining in the current window
  • X-RateLimit-Reset: Unix timestamp when the rate limit resets

Rate Limit Exceeded

When you exceed the rate limit, you'll receive a 429 Too Many Requests response:

{
  "error": {
    "type": "rate_limit_error",
    "message": "Rate limit exceeded",
    "code": "RATE_LIMIT_EXCEEDED",
    "retry_after": 45
  }
}

The retry_after field indicates how many seconds to wait before retrying.
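
Note that retry_after is expressed in seconds, while JavaScript timers take milliseconds. A small conversion helper avoids a common off-by-1000 mistake; the type and function names below are illustrative, based on the error body shown above:

```typescript
// Shape of the 429 error body shown above
interface RateLimitErrorBody {
  error: {
    type: string;
    message: string;
    code: string;
    retry_after: number; // seconds until the window resets
  };
}

// Convert retry_after (seconds) into the millisecond delay setTimeout expects
function waitMsFrom429(body: RateLimitErrorBody): number {
  return body.error.retry_after * 1000;
}
```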

Handling Rate Limits

1. Check Headers Before Requests

const response = await fetch('https://api.flow.dev/v1/posts', {
  headers: {
    'Authorization': `Bearer ${apiKey}`,
  },
});

const remaining = parseInt(response.headers.get('X-RateLimit-Remaining') || '0');
const reset = parseInt(response.headers.get('X-RateLimit-Reset') || '0');

if (remaining < 10) {
  const waitTime = reset - Math.floor(Date.now() / 1000);
  console.log(`Rate limit low. Wait ${waitTime} seconds.`);
}

2. Implement Exponential Backoff

async function makeRequestWithBackoff<T>(
  fn: () => Promise<T>,
  maxRetries = 5
): Promise<T> {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await fn();
    } catch (error) {
      if (error instanceof FlowError && error.status === 429) {
        // retryAfter is reported in seconds; convert to ms for setTimeout,
        // falling back to exponential backoff when it is absent
        const waitTime = error.retryAfter
          ? error.retryAfter * 1000
          : Math.pow(2, i) * 1000;
        console.log(`Rate limited. Waiting ${waitTime}ms...`);
        await new Promise(resolve => setTimeout(resolve, waitTime));
        continue;
      }
      throw error;
    }
  }
  throw new Error('Max retries exceeded');
}

3. Use Batch Endpoints

Instead of making multiple individual requests, use batch endpoints:

// ❌ Slow - makes 10 separate requests
for (const post of posts) {
  await flow.posts.create(post);
}

// ✅ Fast - makes 1 batch request
await flow.posts.createBatch({ posts });

4. Cache Responses

Cache frequently accessed data:

let channelsCache: Channel[] | null = null;
let cacheExpiry = 0;

async function getChannels(): Promise<Channel[]> {
  const now = Date.now();
  if (channelsCache && now < cacheExpiry) {
    return channelsCache;
  }

  channelsCache = await flow.channels.list();
  cacheExpiry = now + (5 * 60 * 1000); // 5 minutes
  return channelsCache;
}

5. Monitor Rate Limit Usage

class RateLimitMonitor {
  private remaining: number = 100;
  private reset: number = 0;

  updateFromHeaders(headers: Headers) {
    this.remaining = parseInt(headers.get('X-RateLimit-Remaining') || '0');
    this.reset = parseInt(headers.get('X-RateLimit-Reset') || '0');
  }

  getRemaining(): number {
    return this.remaining;
  }

  getSecondsUntilReset(): number {
    return Math.max(0, this.reset - Math.floor(Date.now() / 1000));
  }

  shouldThrottle(): boolean {
    return this.remaining < 10;
  }
}

const monitor = new RateLimitMonitor();

// After each request
monitor.updateFromHeaders(response.headers);

if (monitor.shouldThrottle()) {
  const waitTime = monitor.getSecondsUntilReset();
  console.log(`Throttling requests. Reset in ${waitTime}s`);
}

Best Practices

1. Respect Rate Limits

  • Never attempt to bypass rate limits
  • Implement proper retry logic with backoff
  • Monitor your rate limit usage

2. Optimize Request Patterns

  • Use batch endpoints when possible
  • Cache responses when appropriate
  • Combine multiple operations into single requests

3. Handle Errors Gracefully

  • Always check for 429 status codes
  • Respect the retry_after value
  • Implement exponential backoff

4. Monitor Usage

  • Track rate limit headers
  • Log rate limit warnings
  • Alert when approaching limits
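
As a sketch of the alerting practice above, a simple threshold check can turn the X-RateLimit-* header values into a warning message. The function name and the 10% default threshold are illustrative choices, not part of the API:

```typescript
// Warn when remaining capacity drops below a fraction of the window limit
function rateLimitWarning(
  remaining: number,
  limit: number,
  threshold = 0.1 // warn below 10% remaining
): string | null {
  if (remaining / limit < threshold) {
    return `Rate limit warning: ${remaining}/${limit} requests left in this window`;
  }
  return null;
}
```

In practice the returned message would feed a logger or alerting system rather than console output.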

Rate Limit by Endpoint

Endpoint                 Limit   Window
GET /v1/*                100     1 minute
POST /v1/posts           30      1 minute
POST /v1/posts/batch     10      1 minute
POST /v1/webhooks        10      1 minute
POST /v1/api-keys        5       1 minute
POST /v1/channels        20      1 minute
POST /v1/media/upload    50      1 minute

Increasing Limits

Rate limits are designed to accommodate most use cases. If you need higher limits:

  1. Optimize your usage - Use batch endpoints and caching
  2. Contact support - support@flow.dev
  3. Upgrade your plan - Higher-tier plans may include increased limits

Testing Rate Limits

You can test your rate limit handling by deliberately exceeding a limit:

// Make rapid requests to test rate limiting
const promises = Array.from({ length: 150 }, () =>
flow.posts.list()
);

const results = await Promise.allSettled(promises);

const rateLimited = results.filter(
r => r.status === 'rejected' &&
r.reason instanceof FlowError &&
r.reason.status === 429
).length;

console.log(`Rate limited: ${rateLimited} requests`);

SDK Support

All official SDKs handle rate limits automatically:

  • TypeScript SDK: Automatic retry with exponential backoff
  • Python SDK: Automatic retry with exponential backoff
  • Go SDK: Automatic retry with exponential backoff

You can configure retry behavior:

const flow = new Flow(apiKey, {
  maxRetries: 3,
  retryDelay: 1000, // Initial delay in ms
});