
Rate Limits

Understand API rate limits and how to handle them effectively.

Overview

Rate limits control the number of API requests you can make within a specific time period. This ensures fair usage and maintains service quality for all users.

Rate Limit Tiers

Free Tier

  • 60 requests per hour for unauthenticated requests
  • 1,000 requests per hour for authenticated requests
  • 20 requests per second burst limit

Professional Tier

  • 5,000 requests per hour
  • 50 requests per second burst limit
  • Priority support

Enterprise Tier

  • 25,000 requests per hour
  • 100 requests per second burst limit
  • Dedicated support
  • Custom limits available
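The tiers above pair an hourly quota with a per-second burst limit, and a client should pace itself against both. One common client-side pattern is a token bucket; the sketch below is illustrative (not an official SDK), and the rate and capacity values are placeholders for your tier's numbers.

```javascript
// Minimal token-bucket throttle for a per-second burst limit.
// ratePerSecond = sustained rate; capacity = burst size (your tier's values).
class TokenBucket {
  constructor(ratePerSecond, capacity = ratePerSecond) {
    this.rate = ratePerSecond;   // tokens added per second
    this.capacity = capacity;    // maximum burst size
    this.tokens = capacity;      // start with a full bucket
    this.last = Date.now();
  }

  // Returns true if a request may be sent now, consuming one token;
  // false means the caller should wait and try again.
  tryRemove() {
    const now = Date.now();
    // Refill proportionally to elapsed time, capped at capacity
    this.tokens = Math.min(
      this.capacity,
      this.tokens + ((now - this.last) / 1000) * this.rate
    );
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

For example, `new TokenBucket(20)` would pace a client against the Free tier's 20 requests per second burst limit.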

Rate Limit Headers

Every API response includes rate limit information in the headers:

X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 995
X-RateLimit-Reset: 1609459200

Header                   Description
X-RateLimit-Limit        Maximum requests allowed in the current period
X-RateLimit-Remaining    Number of requests remaining in the current period
X-RateLimit-Reset        Unix timestamp when the rate limit resets

Handling Rate Limits

Check Rate Limit Status

function checkRateLimit(response) {
  const limit = response.headers.get('X-RateLimit-Limit');
  const remaining = response.headers.get('X-RateLimit-Remaining');
  const reset = response.headers.get('X-RateLimit-Reset');

  console.log(`Rate Limit: ${remaining}/${limit}`);
  console.log(`Resets at: ${new Date(reset * 1000).toISOString()}`);

  return {
    limit: parseInt(limit, 10),
    remaining: parseInt(remaining, 10),
    reset: parseInt(reset, 10)
  };
}

Handle 429 Responses

When you exceed the rate limit, the API returns a 429 status code:

{
  "success": false,
  "message": "Too many requests - Rate limit exceeded"
}

The response includes a Retry-After header indicating when you can retry:

HTTP/1.1 429 Too Many Requests
Retry-After: 60
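The example above shows Retry-After as a number of seconds, but per the HTTP specification (RFC 9110) the header may also carry an HTTP-date. A defensive parser (an illustrative helper, not part of this API) handles both forms:

```javascript
// Retry-After may be delta-seconds ("60") or an HTTP-date (RFC 9110).
// Returns a wait time in milliseconds; falls back to defaultMs.
function parseRetryAfter(value, defaultMs = 60000) {
  if (!value) return defaultMs;
  const seconds = Number(value);
  if (Number.isFinite(seconds)) return Math.max(0, seconds * 1000);
  const date = Date.parse(value);           // e.g. "Wed, 21 Oct 2026 07:28:00 GMT"
  if (!Number.isNaN(date)) return Math.max(0, date - Date.now());
  return defaultMs;
}
```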

Implement Exponential Backoff

async function requestWithBackoff(url, options, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      const response = await fetch(url, options);

      if (response.status === 429) {
        const retryAfter = parseInt(
          response.headers.get('Retry-After') || '60',
          10
        );

        console.log(`Rate limited. Retrying after ${retryAfter}s...`);

        // Wait for the time the server asked for
        await new Promise(resolve =>
          setTimeout(resolve, retryAfter * 1000)
        );

        continue; // Retry the request
      }

      if (!response.ok) {
        throw new Error(`HTTP ${response.status}: ${response.statusText}`);
      }

      return await response.json();

    } catch (error) {
      if (attempt === maxRetries - 1) {
        throw error;
      }

      // Exponential backoff for other errors
      const delay = Math.pow(2, attempt) * 1000;
      console.log(`Attempt ${attempt + 1} failed. Retrying in ${delay}ms...`);
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }

  // Every attempt was rate limited
  throw new Error(`Rate limited on all ${maxRetries} attempts`);
}
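The fixed 2^attempt delay above can synchronize retries across many clients that fail at the same moment. A common refinement (optional, not required by the API) is "full jitter": pick a random delay up to the exponential ceiling.

```javascript
// Full-jitter backoff: a random delay in [0, base * 2^attempt),
// capped at maxDelayMs, so simultaneous clients spread their retries.
function jitteredDelay(attempt, baseMs = 1000, maxDelayMs = 30000) {
  const ceiling = Math.min(maxDelayMs, baseMs * 2 ** attempt);
  return Math.floor(Math.random() * ceiling);
}
```

Swapping `Math.pow(2, attempt) * 1000` for `jitteredDelay(attempt)` in the catch block above is the only change needed.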

Best Practices

1. Monitor Your Usage

Keep track of your API usage to avoid hitting rate limits:

class RateLimitMonitor {
  constructor() {
    this.requests = [];
  }

  logRequest(response) {
    const rateLimit = {
      timestamp: Date.now(),
      remaining: parseInt(response.headers.get('X-RateLimit-Remaining'), 10),
      limit: parseInt(response.headers.get('X-RateLimit-Limit'), 10),
      reset: parseInt(response.headers.get('X-RateLimit-Reset'), 10)
    };

    this.requests.push(rateLimit);

    // Alert when under 10% of the quota remains
    if (rateLimit.remaining < rateLimit.limit * 0.1) {
      console.warn('Warning: Approaching rate limit!', rateLimit);
    }
  }

  getStats() {
    if (this.requests.length === 0) return null;

    const latest = this.requests[this.requests.length - 1];
    return {
      remaining: latest.remaining,
      limit: latest.limit,
      percentageUsed: ((latest.limit - latest.remaining) / latest.limit * 100).toFixed(2),
      resetTime: new Date(latest.reset * 1000)
    };
  }
}
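Because logRequest records a timestamp per call, the same array can also answer "how many requests did I make in the last N minutes". A standalone sliding-window counter (an illustrative helper, not part of the API):

```javascript
// Count timestamps (in ms) that fall within the last windowMs.
// `now` is injectable to make the function easy to test.
function countInWindow(timestamps, windowMs, now = Date.now()) {
  const cutoff = now - windowMs;
  return timestamps.filter(t => t > cutoff).length;
}
```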

2. Cache Responses

Reduce API calls by caching responses:

class APICache {
  constructor(ttl = 300000) { // 5 minutes default
    this.cache = new Map();
    this.ttl = ttl;
  }

  get(key) {
    const item = this.cache.get(key);

    if (!item) return null;

    if (Date.now() > item.expiry) {
      this.cache.delete(key);
      return null;
    }

    return item.data;
  }

  set(key, data) {
    this.cache.set(key, {
      data,
      expiry: Date.now() + this.ttl
    });
  }
}

// Usage
const cache = new APICache();

async function fetchWithCache(url, options) {
  const cached = cache.get(url);
  if (cached) {
    console.log('Cache hit:', url);
    return cached;
  }

  const response = await fetch(url, options);
  const data = await response.json();

  cache.set(url, data);
  return data;
}
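One caveat: the Map above grows without bound, since expired entries are only removed when they are read. A simple size cap (an illustrative refinement, not part of the API) evicts the oldest entry once the cache is full:

```javascript
// Size-bounded TTL cache. A Map preserves insertion order, so the
// first key returned by keys() is the oldest entry and can be evicted.
class BoundedCache {
  constructor(maxEntries = 500, ttl = 300000) {
    this.cache = new Map();
    this.maxEntries = maxEntries;
    this.ttl = ttl;
  }

  set(key, data) {
    if (this.cache.size >= this.maxEntries) {
      const oldest = this.cache.keys().next().value; // evict oldest insert
      this.cache.delete(oldest);
    }
    this.cache.set(key, { data, expiry: Date.now() + this.ttl });
  }

  get(key) {
    const item = this.cache.get(key);
    if (!item || Date.now() > item.expiry) {
      this.cache.delete(key); // drop missing or expired entries
      return null;
    }
    return item.data;
  }
}
```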

3. Batch Requests

Combine multiple operations into single requests when possible:

// Instead of multiple single requests:
// ❌ Bad
for (const id of routeIds) {
  await fetchRoute(id); // Multiple API calls
}

// ✅ Good - Use batch endpoints when available
const routes = await fetchMultipleRoutes(routeIds); // Single API call
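When a batch endpoint exists, it usually caps how many IDs one request may carry, so large ID lists should be split into fixed-size chunks. The sketch below is illustrative: the `/api/routes/batch` path, the request shape, and the batch size are assumptions, not part of this documentation.

```javascript
// Split an array into chunks of at most `size` items so each batch
// request carries a bounded number of IDs.
function chunk(ids, size) {
  const out = [];
  for (let i = 0; i < ids.length; i += size) {
    out.push(ids.slice(i, i + size));
  }
  return out;
}

// Hypothetical batch call: one request per chunk instead of one per ID.
async function fetchRoutesInBatches(routeIds, batchSize = 50) {
  const results = [];
  for (const group of chunk(routeIds, batchSize)) {
    const response = await fetch('/api/routes/batch', { // illustrative path
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ ids: group })
    });
    results.push(...await response.json());
  }
  return results;
}
```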

4. Implement Request Queuing

Queue requests to stay within rate limits:

class RequestQueue {
  constructor(maxPerSecond = 10) {
    this.queue = [];
    this.processing = false;
    this.maxPerSecond = maxPerSecond;
    this.interval = 1000 / maxPerSecond;
  }

  async add(request) {
    return new Promise((resolve, reject) => {
      this.queue.push({ request, resolve, reject });
      this.process();
    });
  }

  async process() {
    if (this.processing || this.queue.length === 0) return;

    this.processing = true;

    while (this.queue.length > 0) {
      const { request, resolve, reject } = this.queue.shift();

      try {
        const result = await request();
        resolve(result);
      } catch (error) {
        reject(error);
      }

      // Wait before processing the next request
      if (this.queue.length > 0) {
        await new Promise(r => setTimeout(r, this.interval));
      }
    }

    this.processing = false;
  }
}

// Usage
const queue = new RequestQueue(10); // 10 requests per second

async function queuedFetch(url, options) {
  return queue.add(() => fetch(url, options));
}

5. Use Webhooks

For real-time updates, use webhooks instead of polling:

// ❌ Bad - Polling wastes rate limit
setInterval(async () => {
  const status = await checkStatus();
}, 5000);

// ✅ Good - Use webhooks
app.post('/webhook', (req, res) => {
  // Handle webhook event
  const event = req.body;
  // Process event...
  res.sendStatus(200);
});

Increasing Your Rate Limit

To increase your rate limits:

  1. Upgrade Your Plan: Higher tiers come with increased limits
  2. Contact Enterprise Sales: For custom rate limits beyond standard tiers
  3. Optimize Your Usage: Review and optimize your API usage patterns

Monitoring Tools

View Your Usage

Check your current API usage in your dashboard:

curl "https://api.soshiaconnect.com/api/subscriptions/api-call-usage?period=days" \
  -H "Authorization: Bearer YOUR_TOKEN"

Usage Logs

View detailed API usage logs:

curl https://api.soshiaconnect.com/api/api-usage-logs \
  -H "Authorization: Bearer YOUR_TOKEN"

FAQ

Q: Do rate limits apply to all endpoints equally?
A: Yes, rate limits apply to all API endpoints collectively.

Q: What happens if I exceed the rate limit?
A: You'll receive a 429 status code and must wait for the rate limit to reset.

Q: Can I get my rate limit increased temporarily?
A: Contact support for temporary increases during expected high-traffic periods.

Q: Do failed requests count toward the rate limit?
A: Yes, all requests (successful or failed) count toward your rate limit.

Q: Is there a way to check my rate limit without making an API call?
A: Rate limit information is included in every API response header. You can also view usage in your dashboard.

Support

Need help with rate limits?

  • Email: support@soshiaconnect.com
  • Check your current usage in the dashboard
  • Review your API call patterns
  • Consider upgrading your plan for higher limits