Rate limits

Rate limiting helps maintain system stability and ensures fair usage across all of an API's users. This guide explains how rate limiting works at Unified.to and how it affects your API usage.

Background

Historically, rate limiting has been a necessary part of API design to:

  • Prevent server overload from too many simultaneous requests
  • Ensure fair resource distribution among all users
  • Protect APIs from abuse or unintended high-volume usage

In Unified.to's case, we act as a real-time intermediary between your application and various API platforms. While Unified.to has its own set of rate limits, we also need to respect and enforce the limits set by each API platform to maintain stable integrations.

How are rate limits determined at Unified.to?

Rate limits at Unified.to are determined by two factors:

  1. SaaS platform limits: Each underlying platform (like HubSpot, Salesforce, etc.) has its own rate limits that determine how many API calls you can make within a given time period. These are the primary limiting factor, as we must respect them to maintain stable integrations.
  2. Unified.to limits: We also implement our own rate limits to ensure fair usage across all users. However, these are generally more generous than the platform-specific limits.

You can find the specific rate limits for each integration under the Feature Support tab on https://app.unified.to/. When you hit a rate limit, you'll receive a 429 (Too Many Requests) response.

Handle rate limits yourself

When making direct API calls, you'll need to implement your own rate limit handling strategy. The most common approach is to use a backoff and retry mechanism:

Backoff and retry strategy

When you receive a rate limit error (HTTP 429), your application should:

  1. Temporarily pause making requests
  2. Wait for a period of time
  3. Retry the request

The waiting period typically follows an exponential backoff pattern, where each subsequent retry waits longer than the previous one. For example:

  • First retry: Wait 1 second
  • Second retry: Wait 2 seconds
  • Third retry: Wait 4 seconds
  • Fourth retry: Wait 8 seconds

This exponential increase helps prevent overwhelming the API while still maintaining functionality.
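The retry loop above can be sketched in a few lines. Here `send` stands in for any function that performs the HTTP request and returns a response object with a `status_code`; the function names and defaults are illustrative, not part of any Unified.to SDK:

```python
import time

def backoff_delays(max_retries=4, base=1.0):
    """Exponential backoff schedule: 1s, 2s, 4s, 8s for the defaults above."""
    return [base * (2 ** attempt) for attempt in range(max_retries)]

def request_with_retry(send, max_retries=4, sleep=time.sleep):
    """Call send() and retry on HTTP 429, doubling the wait each time.

    `send` is any zero-argument function that performs the request and
    returns a response with a `status_code` attribute. `sleep` is
    injectable so the loop can be tested without real waiting.
    """
    for attempt in range(max_retries + 1):
        response = send()
        if response.status_code != 429:
            return response
        if attempt < max_retries:
            sleep(2 ** attempt)  # 1s, 2s, 4s, 8s, ...
    return response  # still rate limited after all retries
```

Injecting `send` and `sleep` keeps the sketch independent of any particular HTTP client, so the same loop works with `requests`, `httpx`, or a Unified.to SDK call.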

Best practices for handling rate limits

  • Add randomness (jitter) to your retry delays to prevent multiple clients from retrying simultaneously
  • Set a maximum number of retry attempts
  • Log rate limit occurrences to help identify patterns and adjust your strategy
  • Consider implementing a request queue to manage high-volume operations
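The first two practices can be combined in a small delay helper. This sketch uses the "full jitter" approach, drawing each delay uniformly between zero and the exponential ceiling; the `base` and `cap` values are illustrative defaults, not Unified.to requirements:

```python
import random

def jittered_delay(attempt, base=1.0, cap=30.0):
    """Return a randomized backoff delay for the given retry attempt.

    'Full jitter': the delay is uniform between 0 and the exponential
    ceiling (base * 2^attempt), capped at `cap` seconds so late retries
    don't wait unboundedly long.
    """
    ceiling = min(cap, base * (2 ** attempt))
    return random.uniform(0, ceiling)
```

Because each client picks a random point in the window rather than the same fixed delay, retries from many clients spread out instead of arriving in synchronized bursts.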

Use webhooks to avoid worrying about rate limits

While you can implement your own rate limit handling, we strongly recommend using our webhooks instead. Here's why:

  • Easier and faster integration build time: Unified.to manages all rate limiting, backoff, and retry logic for you, and an initial sync delivers all of your existing data
  • Efficient resource usage: Instead of polling for changes, you receive updates only when they occur
  • Reliable delivery: We'll keep trying to deliver webhook events even if your endpoint is temporarily unavailable
  • Automatic scaling: Our webhook system automatically adjusts to rate limits across different providers

You can read more about our webhooks retry mechanism here.
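If you go the webhook route, your endpoint only needs to acknowledge each delivery quickly and hand the payload off for processing. Below is a minimal sketch of a receiver using Python's standard library; the payload shape and the `handle_event` function are illustrative assumptions, not the actual Unified.to event format:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle_event(event):
    """Hypothetical processing step; in production, push the event to a
    queue or background worker so slow handling never delays the 200
    response and triggers delivery retries."""
    print(event.get("type"))

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body sent by the webhook delivery.
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        # Acknowledge immediately, then process.
        self.send_response(200)
        self.end_headers()
        handle_event(event)

if __name__ == "__main__":
    HTTPServer(("", 8080), WebhookHandler).serve_forever()
```

Responding quickly with a 2xx status is what tells the sender the delivery succeeded; anything else (or a timeout) is what the retry mechanism reacts to.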
