AI Agents and Firewall Whitelisting: Static IPs for LLM-Powered Workflows

QuotaGuard Engineering
March 31, 2026
5 min read
Pattern

AI agents are calling more APIs than ever. An agent running on AWS Lambda picks up a task, queries a CRM, pulls data from a partner's API, writes to a database, and triggers a webhook. Each of those downstream services might require IP whitelisting.

The problem: agents running on serverless infrastructure don't have predictable outbound IPs. Lambda functions, Cloud Functions, Vercel serverless, Railway. None of them give you a stable IP address.

The Pattern

A typical AI agent workflow looks like this. An orchestrator (LangChain, CrewAI, AutoGen, a custom agent loop) runs on serverless compute. The agent decides it needs to call an external service. Maybe it's a customer's database. Maybe it's a vendor API. Maybe it's an internal tool behind a corporate firewall.

The service requires the agent's IP to be whitelisted. But the agent is running on Lambda. Its outbound IP can change from one invocation to the next. The firewall rejects the connection.

This is the same IP whitelisting problem that's existed for years with serverless compute. AI agents just make it more common because agents tend to integrate with more services than a typical single-purpose function.

Why Agents Make This Worse

A standard Lambda function usually calls one or two external APIs. You know what those are at build time. You can plan around them.

An AI agent might decide at runtime to call a service you didn't anticipate. Tool-using agents pick from a toolkit. If one of those tools hits a firewalled endpoint, the IP problem shows up as a runtime failure, not a build-time configuration issue.

Agents also tend to run longer workflows with multiple API calls in sequence. If any single call in the chain fails because of an IP rejection, the whole workflow fails. More integrations means more surface area for this problem.
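Because an agent can reach for a firewalled tool you didn't anticipate, one defensive pattern (a sketch of ours, not an official QuotaGuard recommendation) is to try the call directly and retry through the static IP proxy when the connection is refused:

```python
import os
import requests

proxy_url = os.environ.get('QUOTAGUARD_URL')
PROXIES = {'http': proxy_url, 'https': proxy_url}

def resilient_get(url, **kwargs):
    """Try the call directly first; if the connection is refused or
    times out (the usual symptom of an IP-based firewall rejection),
    retry once through the static IP proxy."""
    try:
        response = requests.get(url, timeout=10, **kwargs)
        response.raise_for_status()
        return response
    except (requests.ConnectionError, requests.Timeout):
        return requests.get(url, proxies=PROXIES, timeout=10, **kwargs)
```

This keeps unanticipated tool calls from killing the whole workflow at the cost of one wasted connection attempt.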

The Fix

Route outbound traffic through a static IP proxy. Your agent's HTTP calls go through QuotaGuard, which assigns you two static IPs. The downstream service sees a consistent IP. You whitelist those two IPs once. Every agent invocation uses them.

Python (LangChain / Custom Agent)

import os
import requests

proxy_url = os.environ.get('QUOTAGUARD_URL')

session = requests.Session()
session.proxies = {
    'http': proxy_url,
    'https': proxy_url
}

# Your agent's tool function
def call_partner_api(endpoint, params):
    response = session.get(endpoint, params=params, timeout=30)
    response.raise_for_status()
    return response.json()

Node.js (Custom Agent / API Tools)

// Node's built-in fetch (undici) ignores the `agent` option, so use
// node-fetch (v2, CommonJS), which accepts an http.Agent for proxying.
const fetch = require('node-fetch');
const { HttpsProxyAgent } = require('https-proxy-agent');

const agent = new HttpsProxyAgent(process.env.QUOTAGUARD_URL);

async function callPartnerAPI(url) {
  const response = await fetch(url, { agent });
  return response.json();
}

Set QUOTAGUARD_URL as an environment variable in your Lambda, Cloud Function, or container runtime. Every outbound request routes through two static IPs.
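On Lambda, that can be done from the AWS CLI. The function name is a placeholder, and the credentials and host in the URL are illustrative; use the connection URL from your QuotaGuard dashboard:

```shell
# Set the proxy URL on an existing Lambda function (name and URL are placeholders)
aws lambda update-function-configuration \
  --function-name my-agent-function \
  --environment "Variables={QUOTAGUARD_URL=http://user:pass@static.quotaguard.com:9293}"
```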

Selective Proxying

Not every API call needs a static IP. OpenAI's API doesn't care about your source IP. Anthropic's API doesn't either. Most public SaaS APIs authenticate with API keys, not IP whitelists.

The calls that need static IPs are typically partner APIs behind firewalls, customer databases with security groups, legacy SOAP services with IP-based access control, and SFTP servers.

Route only those calls through the proxy. Let everything else go directly. This keeps latency minimal and avoids unnecessary proxy bandwidth.

import os
import requests

proxy_url = os.environ.get('QUOTAGUARD_URL')

def call_api(url, needs_static_ip=False):
    proxies = {'http': proxy_url, 'https': proxy_url} if needs_static_ip else None
    return requests.get(url, proxies=proxies)

# Public API - direct (GET endpoint, no static IP needed)
openai_response = call_api('https://api.openai.com/v1/models')

# Firewalled partner - through proxy
partner_response = call_api('https://api.partner.com/data', needs_static_ip=True)

Multi-Agent Systems

When you're running multiple agents that all need to hit the same firewalled services, a shared proxy setup keeps things simple. Every agent uses the same QUOTAGUARD_URL. The partner whitelists two IPs. It doesn't matter which agent is calling or which Lambda instance is executing. The outbound IP is always one of the two.
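A minimal way to share that setup (a sketch; the module and function names are ours, not a QuotaGuard API) is a module-level session that every agent's tools import:

```python
# shared_proxy.py - one proxied session shared by every agent and tool
import os
import requests

_session = None

def get_proxied_session():
    """Return a process-wide Session routed through QuotaGuard.

    Every agent that imports this shares the same proxy configuration,
    so all outbound traffic exits from the same two static IPs.
    """
    global _session
    if _session is None:
        proxy_url = os.environ.get('QUOTAGUARD_URL')
        _session = requests.Session()
        _session.proxies = {'http': proxy_url, 'https': proxy_url}
    return _session
```

Tools then call `get_proxied_session().get(...)` instead of constructing their own clients, which also keeps connection pooling in one place.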

This also simplifies audit logging. All traffic to firewalled services flows through the proxy. QuotaGuard's dashboard shows where traffic went, when, and from which IP.

Platforms

QuotaGuard works with the platforms where AI agents commonly run: AWS Lambda, Google Cloud Functions, Azure Functions, Heroku, Render, Railway, Vercel, and container services like ECS and Cloud Run. Anywhere you can set an environment variable and make HTTP requests, you can use the proxy.

For agents that need TCP connections (database tools, SFTP tools), SOCKS5 or QGTunnel handles non-HTTP protocols.
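For the SOCKS5 route, a raw TCP sketch with the PySocks library might look like the following. The environment variable name, destination host, and default port are assumptions on our part; take the real SOCKS credentials from your QuotaGuard dashboard:

```python
import os
from urllib.parse import urlparse

import socks  # PySocks: pip install pysocks

# Assumption: QUOTAGUARD_SOCKS_URL looks like socks5://user:pass@host:1080
proxy = urlparse(os.environ['QUOTAGUARD_SOCKS_URL'])

def open_tunnelled_socket(dest_host, dest_port):
    """Open a TCP socket to dest_host:dest_port through the SOCKS5 proxy,
    so the destination sees one of the two static IPs."""
    s = socks.socksocket()
    s.set_proxy(socks.SOCKS5, proxy.hostname, proxy.port or 1080,
                username=proxy.username, password=proxy.password)
    s.connect((dest_host, dest_port))
    return s

# Example: reach a firewalled Postgres server (placeholder host)
# sock = open_tunnelled_socket('db.partner.example', 5432)
```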

Compliance Considerations

If your agent handles sensitive data (healthcare records, payment information, personal data), the proxy traffic should be encrypted end-to-end. QuotaGuard Shield at $29/month adds SSL passthrough. No TLS termination at the proxy layer. The data stays encrypted from your agent to the destination.

For geographic data residency requirements (EU data must stay in EU), the $899/month data regionality add-on ensures proxy traffic routes through region-specific infrastructure.

Getting Started

Sign up for QuotaGuard Static. Set the proxy URL as an environment variable in your agent's runtime. Route firewalled API calls through the proxy. Whitelist the two static IPs with your partners.

Test at https://ip.quotaguard.com to confirm your traffic is coming from the expected IPs.
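From a shell, the same check can be run with curl, pointing it at the proxy with `-x`:

```shell
# Route a request through the proxy; the response shows the IP the destination sees
curl -x "$QUOTAGUARD_URL" https://ip.quotaguard.com
```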

QuotaGuard Static starts at $19/month.
