API Best Practices

This guide covers best practices for using the Flow API effectively, handling errors, implementing retry logic, and ensuring idempotency.

Error Handling

Understanding Error Responses

All API errors follow a consistent format:

{
  "error": "Error type",
  "message": "Human-readable error message",
  "details": {
    "field": "Additional context"
  }
}

HTTP Status Codes

| Status | Meaning | Action |
| ------ | ------- | ------ |
| 200 | Success | Process response normally |
| 201 | Created | Resource created successfully |
| 400 | Bad Request | Check request format and parameters |
| 401 | Unauthorized | Check API key validity |
| 403 | Forbidden | Check API key permissions |
| 404 | Not Found | Resource doesn't exist |
| 429 | Rate Limited | Implement exponential backoff (see below) |
| 500 | Server Error | Retry with exponential backoff |
| 503 | Service Unavailable | Retry after delay |
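The actions in the table above can be condensed into a small helper that classifies a response status. This is a sketch; the category names (`'ok'`, `'retry'`, `'client-error'`) are illustrative, not part of the Flow API:

```typescript
// Classify a response status per the table above.
// 'retry' covers 429 and 5xx; 'client-error' covers other 4xx.
type Action = 'ok' | 'retry' | 'client-error';

function classifyStatus(status: number): Action {
  if (status >= 200 && status < 300) return 'ok';
  if (status === 429 || status >= 500) return 'retry';
  return 'client-error';
}
```

A retry loop can then branch on `classifyStatus(response.status)` instead of scattering status checks.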

Handling Rate Limits (429)

When you receive a 429 Too Many Requests response:

  1. Read the Retry-After header - This tells you how long to wait
  2. Implement exponential backoff - Wait before retrying
  3. Respect the header - Don't retry before the specified time

Example implementation:

async function makeRequestWithRetry(url: string, options: RequestInit, maxRetries = 3) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const response = await fetch(url, options);

    if (response.status === 429) {
      // Retry-After is in seconds; convert to milliseconds.
      const retryAfterSec = parseInt(response.headers.get('Retry-After') || '60', 10);
      // Never retry before the server-specified time; back off further on repeated 429s.
      const waitTime = Math.max(retryAfterSec * 1000, Math.pow(2, attempt) * 1000);

      if (attempt < maxRetries) {
        await new Promise(resolve => setTimeout(resolve, waitTime));
        continue;
      }
    }

    return response;
  }
}

Handling Server Errors (5xx)

For 500, 502, 503, and 504 errors:

  1. Retry with exponential backoff - Start with 1 second, double each retry
  2. Limit retries - Don't retry indefinitely (max 3-5 attempts)
  3. Log errors - Track failures for monitoring

Example:

async function makeRequestWithBackoff(url: string, options: RequestInit) {
  const maxRetries = 3;

  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      const response = await fetch(url, options);

      if (response.ok || (response.status >= 400 && response.status < 500)) {
        return response; // Success or client error (don't retry)
      }

      // Server error - retry
      if (response.status >= 500 && attempt < maxRetries) {
        const waitTime = Math.pow(2, attempt) * 1000; // 1s, 2s, 4s
        await new Promise(resolve => setTimeout(resolve, waitTime));
        continue;
      }

      return response;
    } catch (error) {
      if (attempt < maxRetries) {
        const waitTime = Math.pow(2, attempt) * 1000;
        await new Promise(resolve => setTimeout(resolve, waitTime));
        continue;
      }
      throw error;
    }
  }
}

Idempotency

Why Idempotency Matters

Idempotency ensures that making the same request multiple times has the same effect as making it once. This is crucial for:

  • Retry safety - Safe to retry failed requests
  • Network reliability - Handles duplicate requests from network issues
  • Race conditions - Prevents duplicate resource creation

Implementing Idempotency Keys

For POST requests that create resources, use idempotency keys:

// Generate a unique idempotency key (UUID recommended)
const idempotencyKey = crypto.randomUUID();

// Include in request headers
const response = await fetch('https://api.flowsocial.app/v1/posts', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${apiKey}`,
    'Content-Type': 'application/json',
    'Idempotency-Key': idempotencyKey, // Include this header
  },
  body: JSON.stringify({
    channelId: 'channel_123',
    content: 'Hello, world!',
  }),
});

Important:

  • Use the same idempotency key for retries of the same request
  • Generate a new key for each unique request
  • Store idempotency keys for at least 24 hours (for duplicate detection)

Idempotent Operations

The following operations are idempotent by design:

  • GET requests - Safe to retry
  • DELETE requests - Safe to retry (returns 404 if already deleted)
  • PUT requests - Safe to retry (replaces resource)

Non-idempotent operations (use idempotency keys):

  • POST requests - Create new resources
  • PATCH requests - Partial updates

Request Timeouts

Always set reasonable timeouts for API requests:

const controller = new AbortController();
const timeoutId = setTimeout(() => controller.abort(), 30000); // 30 seconds

try {
  const response = await fetch(url, {
    ...options,
    signal: controller.signal,
  });
  clearTimeout(timeoutId);
  return response;
} catch (error) {
  clearTimeout(timeoutId);
  if (error.name === 'AbortError') {
    throw new Error('Request timeout');
  }
  throw error;
}

Recommended timeouts:

  • Standard requests: 30 seconds
  • Bulk operations: 60 seconds
  • File uploads: 120 seconds
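The snippet and recommendations above can be folded into one reusable wrapper. A sketch, assuming the operation names (`'standard'`, `'bulk'`, `'upload'`) as illustrative labels:

```typescript
// Recommended timeouts per operation type, in milliseconds.
const TIMEOUTS_MS = { standard: 30_000, bulk: 60_000, upload: 120_000 } as const;

async function fetchWithTimeout(
  url: string,
  options: RequestInit = {},
  kind: keyof typeof TIMEOUTS_MS = 'standard',
): Promise<Response> {
  const controller = new AbortController();
  const timeoutId = setTimeout(() => controller.abort(), TIMEOUTS_MS[kind]);
  try {
    // Abort the request when the deadline passes.
    return await fetch(url, { ...options, signal: controller.signal });
  } finally {
    clearTimeout(timeoutId); // always clear the timer
  }
}
```

Callers then pick the timeout by intent, e.g. `fetchWithTimeout(uploadUrl, opts, 'upload')`.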

Pagination

When listing resources, use pagination to avoid loading too much data:

async function listAllPosts(apiKey: string) {
  const allPosts = [];
  let offset = 0;
  const limit = 100;

  while (true) {
    const response = await fetch(
      `https://api.flowsocial.app/v1/posts?limit=${limit}&offset=${offset}`,
      {
        headers: { 'Authorization': `Bearer ${apiKey}` },
      }
    );

    const data = await response.json();
    allPosts.push(...data.data);

    // Check if there are more pages
    if (!data.pagination || !data.pagination.hasMore) {
      break;
    }

    offset += limit;
  }

  return allPosts;
}

API Key Security

Best Practices

  1. Never commit API keys to version control

    • Use environment variables
    • Use secret management tools (AWS Secrets Manager, HashiCorp Vault, etc.)
  2. Rotate keys regularly

    • Rotate keys every 90 days
    • Create new keys before deleting old ones
  3. Use least privilege

    • Only grant necessary permissions
    • Create separate keys for different services
  4. Monitor key usage

    • Check lastUsedAt timestamp
    • Set up alerts for unusual activity
  5. Revoke compromised keys immediately

    • Delete keys if exposed
    • Monitor for unauthorized usage

Environment Variables Example

# .env file (never commit this)
FLOW_API_KEY=flow_sk_live_abc12345_xyz789...

// Use the environment variable in your application
const apiKey = process.env.FLOW_API_KEY;
if (!apiKey) {
  throw new Error('FLOW_API_KEY environment variable is required');
}

Rate Limit Best Practices

  1. Monitor rate limit headers

    const limit = response.headers.get('X-RateLimit-Limit');
    const remaining = response.headers.get('X-RateLimit-Remaining');
    const reset = response.headers.get('X-RateLimit-Reset');
  2. Implement request queuing

    • Queue requests when approaching limits
    • Process queue after rate limit window resets
  3. Use bulk operations

    • Combine multiple operations into single requests
    • Reduces API calls and rate limit usage
  4. Cache responses

    • Cache GET requests when appropriate
    • Reduces unnecessary API calls
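Request queuing (item 2 above) can be as simple as a serial queue that spaces requests by a fixed interval. A minimal sketch; the interval value and queue shape are assumptions, not part of the Flow API:

```typescript
// A minimal serial queue: tasks run one at a time, each delayed by
// `intervalMs`, which keeps the request rate under a chosen ceiling.
class RequestQueue {
  private chain: Promise<unknown> = Promise.resolve();

  constructor(private intervalMs: number) {}

  enqueue<T>(task: () => Promise<T>): Promise<T> {
    const run = this.chain
      .then(() => new Promise<void>(resolve => setTimeout(resolve, this.intervalMs)))
      .then(task);
    // Keep the chain alive even if a task rejects.
    this.chain = run.catch(() => undefined);
    return run;
  }
}
```

Usage: `queue.enqueue(() => fetch(url, options))`. With, say, `new RequestQueue(100)`, requests are capped at roughly 10 per second regardless of how fast callers enqueue them.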

Webhook Best Practices

See Webhooks Guide for detailed webhook best practices, including:

  • Signature verification
  • Idempotency handling
  • Retry logic
  • Security considerations

SDK Usage

TypeScript SDK

The Flow TypeScript SDK handles many best practices automatically:

import { Flow } from '@flowdev/sdk';

const flow = new Flow('flow_sk_live_...', {
  timeout: 30000, // 30 seconds
  maxRetries: 3, // Automatic retry with exponential backoff
});

// SDK automatically handles:
// - Retry logic for 429 and 5xx errors
// - Request timeouts
// - Error parsing
// - Rate limit headers

Python SDK

The Flow Python SDK also handles retries automatically:

from flow_sdk import Flow

flow = Flow(
    api_key="flow_sk_live_...",
    timeout=30,
    max_retries=3,  # Automatic retry with exponential backoff
)

# SDK automatically handles retries and error parsing

Monitoring and Logging

What to Log

  1. Request metadata

    • Endpoint, method, timestamp
    • Request ID (from X-Request-ID header)
  2. Response metadata

    • Status code, response time
    • Rate limit headers
  3. Errors

    • Error type, message, status code
    • Request context (without sensitive data)

Example Logging

async function logRequest(url: string, options: RequestInit) {
  const startTime = Date.now();
  const requestId = crypto.randomUUID();

  try {
    const response = await fetch(url, {
      ...options,
      headers: {
        ...options.headers,
        'X-Request-ID': requestId,
      },
    });

    const duration = Date.now() - startTime;

    console.log({
      requestId,
      method: options.method,
      url,
      status: response.status,
      duration,
      rateLimitRemaining: response.headers.get('X-RateLimit-Remaining'),
    });

    return response;
  } catch (error) {
    const duration = Date.now() - startTime;
    console.error({
      requestId,
      method: options.method,
      url,
      error: error.message,
      duration,
    });
    throw error;
  }
}

Testing

Test Environment

Use the test/staging environment for development:

const flow = new Flow('flow_sk_test_...', {
  baseURL: 'https://api-staging.flowsocial.app', // Test environment
});

Mocking for Tests

Mock API responses in your tests:

// Mock fetch for testing
global.fetch = jest.fn(() =>
  Promise.resolve({
    ok: true,
    json: async () => ({ id: 'post_123', content: 'Test post' }),
  })
);

Batch Operations

Batch endpoints reduce API calls, improve throughput, and help you stay within rate limits.

When to use batch endpoints

Use batch operations when you need to create, update, or delete many posts at once:

  • POST /v1/posts/batch
  • PATCH /v1/posts/batch
  • DELETE /v1/posts/batch

Example: Batch create posts (curl)

curl -X POST https://api.flowsocial.app/v1/posts/batch \
  -H "Authorization: Bearer flow_sk_live_..." \
  -H "Content-Type: application/json" \
  -d '{
    "posts": [
      { "channelId": "channel_123", "content": "Post 1" },
      { "channelId": "channel_123", "content": "Post 2" }
    ]
  }'

Example: Batch create posts (TypeScript SDK)

const result = await flow.posts.createBatch({
  posts: [
    { channelId: 'channel_123', content: 'Post 1' },
    { channelId: 'channel_123', content: 'Post 2' },
  ],
});

console.log('Created:', result.created.length);
console.log('Failed:', result.failed.length);

Performance Optimization

  1. Batch operations - Use batch endpoints when available
  2. Parallel requests - Make independent requests in parallel
  3. Caching - Cache GET requests with appropriate TTL
  4. Connection pooling - Reuse HTTP connections
  5. Compression - Enable gzip compression (automatic with SDKs)
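Item 2 above, parallel requests, is worth a sketch: independent requests should be issued concurrently with `Promise.all` rather than awaited one by one. `fetchJson` here is a hypothetical stand-in for your own request helper:

```typescript
// Load two independent resources concurrently instead of sequentially.
async function loadDashboard(fetchJson: (path: string) => Promise<unknown>) {
  const [posts, channels] = await Promise.all([
    fetchJson('/v1/posts'),
    fetchJson('/v1/channels'),
  ]);
  return { posts, channels };
}
```

The total latency is that of the slowest request, not the sum of both. Only parallelize requests that don't depend on each other's results, and keep an eye on rate limits when fanning out.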

Support