Overview

Understanding rate limits and following best practices ensures reliable API usage and prevents service disruptions.

Rate Limits

Current Limits

Limit                    Value        Reset
Requests per minute      100          Per minute
Batch size               60 records   Per request
Concurrent connections   10           Per API key

Rate Limit Headers

Responses include rate limit information:
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 85
X-RateLimit-Reset: 1640995200
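These headers let you pause proactively before the limit is exhausted. A sketch, assuming your HTTP client exposes response headers through a Fetch-style `headers.get()` accessor (adjust for your client; `msUntilReset` is a hypothetical helper):

```javascript
// Sketch: compute how long to wait before the next request.
// Assumes `headers` behaves like the Fetch API's Headers object.
function msUntilReset(headers, now = Date.now()) {
  const remaining = Number(headers.get('X-RateLimit-Remaining'));
  const resetEpochSeconds = Number(headers.get('X-RateLimit-Reset'));
  if (remaining > 0) return 0; // budget left, no need to wait
  return Math.max(0, resetEpochSeconds * 1000 - now);
}
```

Sleeping for `msUntilReset(...)` milliseconds before the next call avoids ever triggering a 429 in the first place.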

Handling Rate Limits

When the limit is reached, the API responds with HTTP 429. Retry with exponential backoff:
async function makeRequestWithRetry(query, variables, retries = 3) {
  try {
    return await client.request(query, variables);
  } catch (error) {
    if (error.response?.status === 429 && retries > 0) {
      const delay = Math.pow(2, 4 - retries) * 1000; // Exponential backoff: 2s, 4s, 8s
      await new Promise(resolve => setTimeout(resolve, delay));
      return makeRequestWithRetry(query, variables, retries - 1);
    }
    throw error;
  }
}
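Some servers also include a `Retry-After` header on 429 responses; whether this API does is an assumption worth verifying against your actual error responses. When present, it is a better wait estimate than a fixed schedule. A variant of the delay calculation (`backoffDelayMs` is a hypothetical helper, assuming the thrown error exposes `error.response.headers`):

```javascript
// Sketch: prefer the server's Retry-After hint over a fixed backoff.
// Falls back to exponential backoff when the header is absent.
function backoffDelayMs(error, attempt) {
  const retryAfter = error?.response?.headers?.get?.('Retry-After');
  if (retryAfter && !Number.isNaN(Number(retryAfter))) {
    return Number(retryAfter) * 1000; // header value is in seconds
  }
  return Math.pow(2, attempt) * 1000; // 2s, 4s, 8s, ...
}
```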

Best Practices

1. Use Batching

Batch operations reduce API calls:
// ❌ Bad: 60 individual requests
for (const record of records) {
  await client.request(CREATE_PERSON, { data: record });
}

// ✅ Good: 1 batch request
await client.request(CREATE_PEOPLE, { data: records });
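Because the batch limit is 60 records per request, larger datasets must be split before batching. A minimal sketch (`chunk` is a hypothetical helper, not part of the client library):

```javascript
const BATCH_SIZE = 60; // matches the per-request batch limit above

// Split an array into consecutive chunks of at most `size` elements.
function chunk(items, size = BATCH_SIZE) {
  const chunks = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}

// Usage (CREATE_PEOPLE and client as in the example above):
// for (const batch of chunk(records)) {
//   await client.request(CREATE_PEOPLE, { data: batch });
// }
```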

2. Add Delays

Add delays between requests:
const DELAY_MS = 100; // Minimum 100ms between requests

for (const record of records) {
  await client.request(UPDATE_PERSON, { data: record });
  await new Promise(resolve => setTimeout(resolve, DELAY_MS));
}
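The same minimum gap can be enforced centrally by wrapping the request function, so no call site can forget the delay. A sketch (`rateLimited` is a hypothetical helper, not part of the client library):

```javascript
// Sketch: enforce a minimum gap between calls to any async function.
function rateLimited(fn, minGapMs = 100) {
  let last = 0;
  return async (...args) => {
    const wait = Math.max(0, last + minGapMs - Date.now());
    if (wait > 0) await new Promise(resolve => setTimeout(resolve, wait));
    last = Date.now();
    return fn(...args);
  };
}

// Usage: const throttledRequest = rateLimited(client.request.bind(client));
```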

3. Use Pagination

Always paginate large datasets:
async function fetchAllRecords() {
  const records = [];
  let hasNextPage = true;
  let after = null;

  while (hasNextPage) {
    const data = await client.request(QUERY, { first: 60, after });
    records.push(...data.edges);
    hasNextPage = data.pageInfo.hasNextPage;
    after = data.pageInfo.endCursor;
    
    await new Promise(resolve => setTimeout(resolve, 100));
  }

  return records;
}

4. Cache Results

Cache data when possible:
const cache = new Map();
const CACHE_TTL = 5 * 60 * 1000; // 5 minutes

async function getPerson(id) {
  const cached = cache.get(id);
  if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
    return cached.data;
  }

  const data = await client.request(GET_PERSON, { id });
  cache.set(id, { data, timestamp: Date.now() });
  return data;
}
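Since the cache above is a plain `Map`, expired entries linger until they happen to be re-requested. A periodic sweep keeps it from growing unbounded; this sketch assumes the same `{ data, timestamp }` entry shape as above:

```javascript
// Sketch: delete entries older than the TTL and return the remaining count.
function evictExpired(cache, ttlMs, now = Date.now()) {
  for (const [key, entry] of cache) {
    if (now - entry.timestamp >= ttlMs) cache.delete(key);
  }
  return cache.size;
}

// Usage: setInterval(() => evictExpired(cache, CACHE_TTL), CACHE_TTL);
```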

5. Optimize Queries

Request only needed fields:
// ❌ Bad: Fetches all fields
query {
  people {
    edges {
      node {
        id
        email
        firstName
        lastName
        company {
          id
          name
          industry
          # ... many more fields
        }
      }
    }
  }
}

// ✅ Good: Fetches only needed fields
query {
  people {
    edges {
      node {
        id
        email
        firstName
      }
    }
  }
}

6. Handle Errors Gracefully

Always handle errors:
async function safeRequest(query, variables) {
  try {
    return await client.request(query, variables);
  } catch (error) {
    console.error('API Error:', error.message);
    return null; // callers must check for a null result
  }
}

Monitoring Usage

Track your API usage:
// Count and log every request
let requestCount = 0;
const originalRequest = client.request.bind(client);
client.request = async function(query, variables) {
  requestCount++;
  console.log(`API Request: ${new Date().toISOString()}`);
  return originalRequest(query, variables);
};

// Report requests per minute
setInterval(() => {
  console.log(`Requests in last minute: ${requestCount}`);
  requestCount = 0;
}, 60000);

Optimization Checklist

  • Use batch operations when possible
  • Add at least a 100 ms delay between requests
  • Implement pagination for large datasets
  • Cache frequently accessed data
  • Request only needed fields
  • Handle rate limit errors with backoff
  • Run exports during off-peak hours
  • Monitor API usage regularly