QuotaGuard and Kubernetes Integration Guide

    QuotaGuard Static IPs allow your Kubernetes workloads to send outbound traffic through a load-balanced pair of static IP addresses. Once set up, you can use QuotaGuard’s IPs to connect to firewalled databases and APIs that require IP allowlisting.

    You do not need QuotaGuard for internal cluster traffic. Service-to-service communication within your Kubernetes cluster stays internal. QuotaGuard is for connecting to external services that require a known, static source IP address.

    Why Kubernetes Needs Static Egress IPs

    Kubernetes presents unique egress challenges that infrastructure-level solutions struggle to solve:

    Dynamic Pod IPs: Pods receive ephemeral IPs from your cluster’s CIDR range. When a pod is rescheduled, it gets a new IP. You cannot allowlist individual pod IPs on external firewalls.

    Node-level SNAT: By default, Kubernetes SNATs outbound pod traffic to node IPs. Nodes also scale up and down, creating large, unpredictable CIDR ranges that external services refuse to allowlist.

    Serverless Kubernetes: AWS Fargate for EKS and GKE Autopilot abstract away nodes entirely. You cannot assign static IPs at the infrastructure level because you do not control the underlying infrastructure.

    HPA Scaling: The Horizontal Pod Autoscaler can scale your deployment from 10 to 1,000 pods based on load. Infrastructure-level solutions that allocate IPs per pod cause IP exhaustion. Shared NAT gateways become expensive bottlenecks.

    Multi-tenant Clusters: Different namespaces may need different static IPs for compliance, billing separation, or partner requirements. Infrastructure-level solutions typically operate cluster-wide without namespace granularity.

    QuotaGuard solves these problems by operating at the pod level. Your application routes traffic through the proxy regardless of where the pod runs, how many replicas exist, or what CNI you use.

    Native Options (Complex and Expensive)

    Kubernetes offers several native approaches to static egress. Each has significant tradeoffs:

    Solution | Pros | Cons
    Cloud NAT Gateways | Managed service, cluster-wide | Expensive (hourly + per-GB fees), no pod/namespace granularity, can exceed compute costs
    AKS Static Egress Gateway | Microsoft-managed, pod annotation routing | Azure-only, requires K8s 1.34+, dedicated node pools
    Antrea Egress SNAT | Open source, IP pool support | Requires Antrea CNI, v1.2.0+
    Calico Egress Gateways | Namespace-level granularity | Commercial license only, not in Calico Open Source
    nirmata/kube-static-egress-ip | Works across CNIs | Alpha stage, self-managed, DaemonSet overhead
    Service Mesh (Istio) | Flexible egress control | Complex setup, significant resource overhead

    Cloud NAT Gateways are the most common approach. AWS NAT Gateway costs $0.045/hour plus $0.045/GB processed, so a cluster pushing 1 TB of egress monthly pays roughly $78 for a single gateway: about $33 in hourly charges plus $45 in data-processing fees. That cost compounds across multiple availability zones and regions, and you still get no namespace-level granularity.
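
    A quick back-of-the-envelope check makes the math concrete (the rates are the published us-east-1 prices; substitute your own region and traffic volume):

    # Approximate monthly cost of one AWS NAT Gateway in one AZ
    HOURS_PER_MONTH = 730
    HOURLY_RATE = 0.045   # USD per gateway-hour
    PER_GB_RATE = 0.045   # USD per GB processed
    egress_gb = 1000      # roughly 1 TB of monthly egress

    total = HOURS_PER_MONTH * HOURLY_RATE + egress_gb * PER_GB_RATE
    print(f"${total:.2f}/month")  # -> $77.85/month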

    QuotaGuard advantages:

    • Works on EKS, GKE, AKS, bare metal. Same configuration everywhere.
    • Operates at pod level. Works with Fargate for EKS and GKE Autopilot.
    • Namespace-level granularity. Different credentials per namespace for compliance and billing.
    • HPA-friendly. Scales without IP exhaustion. Connection pooling handled by QuotaGuard.
    • Predictable pricing. No per-GB egress fees beyond your QuotaGuard plan.
    • Audit logging. Distinct credentials per namespace for traffic attribution.

    Getting Started

    After creating a QuotaGuard account, you will be redirected to your dashboard where you can find your proxy credentials and static IP addresses.

    Choose the right proxy region: Match your QuotaGuard region to your cluster’s location for minimum latency.

    Cloud Provider Region | QuotaGuard Region
    us-east-1, us-east-2 | US-East
    us-west-1, us-west-2 | US-West
    ca-central-1 | Canada (Montreal)
    eu-west-1, eu-west-2 | EU-West (Ireland)
    eu-central-1 | EU-Central (Frankfurt)
    ap-southeast-1 | AP-Southeast (Singapore)
    ap-northeast-1 | AP-Northeast (Tokyo)
    ap-southeast-2 | Australia (Sydney)

    Your proxy URL will look like this:

    http://username:password@us-east-static-01.quotaguard.com:9293
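
    Each part of this URL (scheme, credentials, host, port) maps onto settings used throughout this guide. When a client needs the pieces separately, the standard library can split them out:

    from urllib.parse import urlparse

    # Example URL from above -- substitute your real credentials
    parts = urlparse("http://username:password@us-east-static-01.quotaguard.com:9293")
    print(parts.hostname)  # us-east-static-01.quotaguard.com
    print(parts.port)      # 9293
    print(parts.username)  # username
    print(parts.password)  # password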
    

    Finding Your Static IPs: Your two static IPs are displayed in the QuotaGuard dashboard. Both IPs are active simultaneously for high availability. Add both to any firewall allowlists on the target service side.

    Method 1: Environment Variable Injection

    This is the simplest approach: store your proxy URL in a Kubernetes Secret and inject it as environment variables. Most languages and HTTP clients automatically respect the HTTP_PROXY and HTTPS_PROXY environment variables.

    Step 1: Create a Secret

    Store your QuotaGuard credentials securely:

    kubectl create secret generic quotaguard-proxy \
      --from-literal=QUOTAGUARDSTATIC_URL="http://username:password@us-east-static-01.quotaguard.com:9293" \
      -n your-namespace
    

    Or use a YAML manifest for GitOps workflows:

    # quotaguard-secret.yaml
    apiVersion: v1
    kind: Secret
    metadata:
      name: quotaguard-proxy
      namespace: your-namespace
    type: Opaque
    stringData:
      QUOTAGUARDSTATIC_URL: "http://username:password@us-east-static-01.quotaguard.com:9293"
    

    Apply it:

    kubectl apply -f quotaguard-secret.yaml
    

    Important: For production, use a secrets management solution like Sealed Secrets, External Secrets Operator, or your cloud provider’s secrets manager integration.

    Step 2: Create a NO_PROXY ConfigMap

    This is critical. Without NO_PROXY, internal cluster traffic routes through the proxy and breaks service-to-service communication.

    # quotaguard-config.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: quotaguard-config
      namespace: your-namespace
    data:
      NO_PROXY: "localhost,127.0.0.1,.svc.cluster.local,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16"
    

    Apply it:

    kubectl apply -f quotaguard-config.yaml
    

    Adjust the CIDRs to match your cluster:

    • Check your pod CIDR: kubectl cluster-info dump | grep -m 1 cluster-cidr
    • Check your service CIDR: kubectl cluster-info dump | grep -m 1 service-cluster-ip-range

    Common configurations:

    Cluster Type | Typical Pod CIDR | Typical Service CIDR
    EKS | 10.0.0.0/16 | 172.20.0.0/16
    GKE | 10.0.0.0/14 | 10.4.0.0/14
    AKS | 10.244.0.0/16 | 10.0.0.0/16
    k3s | 10.42.0.0/16 | 10.43.0.0/16
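
    One caveat: NO_PROXY handling varies by client. Go's net/http and Python's requests understand CIDR entries like 10.0.0.0/8 when the target is a literal IP, but Python's standard-library urllib only matches exact hosts and domain suffixes. A quick way to see the stdlib behavior inside a Linux container:

    import os
    from urllib.request import proxy_bypass

    os.environ["no_proxy"] = "localhost,127.0.0.1,.svc.cluster.local,10.0.0.0/8"

    # Domain-suffix entries match as expected...
    print(proxy_bypass("my-svc.default.svc.cluster.local"))  # 1 -> bypasses proxy
    # ...but urllib does not interpret CIDR entries, so this IP is still proxied
    print(proxy_bypass("10.1.2.3"))                          # 0 -> goes through proxy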

    Step 3: Configure Your Deployment

    Reference both the Secret and ConfigMap in your deployment:

    # deployment.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
      namespace: your-namespace
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
          - name: my-app
            image: my-app:latest
            env:
            # Inject proxy URL from Secret
            - name: QUOTAGUARDSTATIC_URL
              valueFrom:
                secretKeyRef:
                  name: quotaguard-proxy
                  key: QUOTAGUARDSTATIC_URL
            # Set standard proxy environment variables
            - name: HTTP_PROXY
              valueFrom:
                secretKeyRef:
                  name: quotaguard-proxy
                  key: QUOTAGUARDSTATIC_URL
            - name: HTTPS_PROXY
              valueFrom:
                secretKeyRef:
                  name: quotaguard-proxy
                  key: QUOTAGUARDSTATIC_URL
            # Exclude internal traffic from proxy
            - name: NO_PROXY
              valueFrom:
                configMapKeyRef:
                  name: quotaguard-config
                  key: NO_PROXY
    

    Languages That Respect Proxy Environment Variables

    Many languages and HTTP clients automatically use HTTP_PROXY and HTTPS_PROXY when set:

    Language | HTTP Client | Auto-Proxy Support
    Python | requests, httpx, urllib3 | Yes
    Node.js | axios (with config), node-fetch v3+ | Partial
    Go | net/http | Yes
    Ruby | Net::HTTP, Faraday | Yes
    Java | OkHttp, Apache HttpClient | Configurable
    Rust | reqwest | Yes

    Node.js note: Native fetch and many npm packages do not automatically respect proxy environment variables. See the language-specific examples below.

    Python Example (Auto-Proxy)

    Python’s requests library respects environment variables automatically:

    import requests
    
    # No explicit proxy configuration needed when HTTP_PROXY/HTTPS_PROXY are set
    response = requests.get('https://api.example.com/data')
    print(response.json())
    

    Go Example (Auto-Proxy)

    Go’s net/http respects environment variables automatically:

    package main
    
    import (
        "fmt"
        "io"
        "net/http"
    )
    
    func main() {
        // No explicit proxy configuration needed when HTTP_PROXY/HTTPS_PROXY are set
        resp, err := http.Get("https://api.example.com/data")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
    
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(string(body))
    }
    

    Ruby Example (Auto-Proxy)

    Ruby’s Net::HTTP respects environment variables automatically:

    require 'net/http'
    require 'uri'
    
    # No explicit proxy configuration needed when HTTP_PROXY/HTTPS_PROXY are set
    uri = URI('https://api.example.com/data')
    response = Net::HTTP.get(uri)
    puts response
    

    Node.js Example (Explicit Configuration Required)

    Node.js’s built-in http module and native fetch do not respect proxy environment variables, and support across npm packages is inconsistent. Configure the proxy explicitly:

    const axios = require('axios');
    const { HttpsProxyAgent } = require('https-proxy-agent');
    
    const proxyUrl = process.env.QUOTAGUARDSTATIC_URL;
    const agent = new HttpsProxyAgent(proxyUrl);
    
    // Disable axios's built-in proxy handling so the agent does all the work
    axios.get('https://api.example.com/data', { httpsAgent: agent, proxy: false })
      .then(response => console.log(response.data))
      .catch(error => console.error(error));
    

    Install the proxy agent: npm install https-proxy-agent

    Using fetch with undici:

    import { ProxyAgent, fetch } from 'undici';
    
    const proxyUrl = process.env.QUOTAGUARDSTATIC_URL;
    const dispatcher = new ProxyAgent(proxyUrl);
    
    const response = await fetch('https://api.example.com/data', { dispatcher });
    const data = await response.json();
    console.log(data);
    

    Method 2: QGTunnel Sidecar

    For applications that do not respect proxy environment variables, or for database connections over non-HTTP protocols, use QGTunnel as a sidecar container.

    QGTunnel creates local port mappings that route traffic through QuotaGuard’s SOCKS5 proxy. Your application connects to localhost, and QGTunnel handles the proxying transparently.

    Step 1: Create Secrets for QGTunnel

    # qgtunnel-secret.yaml
    apiVersion: v1
    kind: Secret
    metadata:
      name: quotaguard-qgtunnel
      namespace: your-namespace
    type: Opaque
    stringData:
      QUOTAGUARDSTATIC_URL: "http://username:password@us-east-static-01.quotaguard.com:9293"
    

    Step 2: Configure Tunnel in QuotaGuard Dashboard

    Open your QuotaGuard dashboard and navigate to Settings > Setup > Tunnel > Create Tunnel.

    Example PostgreSQL configuration:

    Setting | Value
    Remote Destination | tcp://your-database.example.com:5432
    Local Port | 5432
    Transparent | false
    Encrypted | false

    Note: In Kubernetes, you typically want Transparent: false because you will configure your application to connect to localhost rather than the original hostname.

    Step 3: Download Configuration File

    In your QuotaGuard dashboard, go to Tunnel > Download configuration and save it as .qgtunnel. Create a ConfigMap from this file:

    kubectl create configmap qgtunnel-config \
      --from-file=.qgtunnel \
      -n your-namespace
    

    Step 4: Deploy with Sidecar

    # deployment-with-sidecar.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
      namespace: your-namespace
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
          # Your application container
          - name: my-app
            image: my-app:latest
            env:
            # Point database connection to localhost (where QGTunnel listens)
            - name: DATABASE_HOST
              value: "localhost"
            - name: DATABASE_PORT
              value: "5432"
          
          # QGTunnel sidecar container
          - name: qgtunnel
            image: quotaguard/qgtunnel:latest
            env:
            - name: QUOTAGUARDSTATIC_URL
              valueFrom:
                secretKeyRef:
                  name: quotaguard-qgtunnel
                  key: QUOTAGUARDSTATIC_URL
            volumeMounts:
            - name: qgtunnel-config
              mountPath: /app/.qgtunnel
              subPath: .qgtunnel
            resources:
              requests:
                memory: "32Mi"
                cpu: "10m"
              limits:
                memory: "64Mi"
                cpu: "100m"
          
          volumes:
          - name: qgtunnel-config
            configMap:
              name: qgtunnel-config
    

    Step 5: Configure Your Application

    With QGTunnel running as a sidecar, your application connects to localhost on the configured port:

    Python (PostgreSQL):

    import psycopg2
    
    # Connect to localhost where QGTunnel listens
    conn = psycopg2.connect(
        host='localhost',  # QGTunnel sidecar
        port=5432,
        database='mydb',
        user='dbuser',
        password='dbpass'
    )
    

    Node.js (MongoDB):

    // Top-level await requires an ES module (.mjs or "type": "module")
    import { MongoClient } from 'mongodb';
    
    // Connect to localhost where QGTunnel listens
    const client = new MongoClient('mongodb://localhost:27017/mydb');
    await client.connect();
    

    No proxy-aware code required. Your application thinks it is connecting directly.

    Method 3: Helm Chart Integration

    For production deployments using Helm, template your proxy configuration for environment-specific credentials.

    values.yaml

    # values.yaml
    quotaguard:
      enabled: true
      secretName: quotaguard-proxy
      # Different URLs per environment
      proxyUrl: ""  # Set via --set or environment-specific values file
      noProxy: "localhost,127.0.0.1,.svc.cluster.local,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16"
    

    values-production.yaml

    # values-production.yaml
    quotaguard:
      enabled: true
      proxyUrl: "http://prod-user:prod-pass@us-east-static-01.quotaguard.com:9293"
    

    values-staging.yaml

    # values-staging.yaml
    quotaguard:
      enabled: true
      proxyUrl: "http://staging-user:staging-pass@us-east-static-01.quotaguard.com:9293"
    

    templates/secret.yaml

    {{- if .Values.quotaguard.enabled }}
    apiVersion: v1
    kind: Secret
    metadata:
      name: {{ .Values.quotaguard.secretName }}
    type: Opaque
    stringData:
      QUOTAGUARDSTATIC_URL: {{ .Values.quotaguard.proxyUrl | quote }}
    {{- end }}
    

    templates/configmap.yaml

    {{- if .Values.quotaguard.enabled }}
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: quotaguard-config
    data:
      NO_PROXY: {{ .Values.quotaguard.noProxy | quote }}
    {{- end }}
    

    templates/deployment.yaml

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: {{ include "myapp.fullname" . }}
    spec:
      template:
        spec:
          containers:
          - name: {{ .Chart.Name }}
            image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
            env:
            {{- if .Values.quotaguard.enabled }}
            - name: QUOTAGUARDSTATIC_URL
              valueFrom:
                secretKeyRef:
                  name: {{ .Values.quotaguard.secretName }}
                  key: QUOTAGUARDSTATIC_URL
            - name: HTTP_PROXY
              valueFrom:
                secretKeyRef:
                  name: {{ .Values.quotaguard.secretName }}
                  key: QUOTAGUARDSTATIC_URL
            - name: HTTPS_PROXY
              valueFrom:
                secretKeyRef:
                  name: {{ .Values.quotaguard.secretName }}
                  key: QUOTAGUARDSTATIC_URL
            - name: NO_PROXY
              valueFrom:
                configMapKeyRef:
                  name: quotaguard-config
                  key: NO_PROXY
            {{- end }}
    

    Deploying with Helm

    # Production
    helm upgrade --install my-app ./my-chart -f values-production.yaml
    
    # Staging
    helm upgrade --install my-app ./my-chart -f values-staging.yaml
    
    # Or pass the proxy URL directly
    helm upgrade --install my-app ./my-chart \
      --set quotaguard.proxyUrl="http://user:pass@us-east-static-01.quotaguard.com:9293"
    

    Multi-Tenant / Multi-Namespace Setup

    For clusters serving multiple teams or customers, use separate QuotaGuard credentials per namespace. This provides:

    • Traffic attribution: Know which namespace generated which traffic
    • Billing separation: Charge different cost centers appropriately
    • Security isolation: Compromised credentials in one namespace do not affect others
    • Compliance: Different IPs for different regulatory requirements

    Per-Namespace Secrets

    Create a QuotaGuard account (or sub-account) for each namespace:

    # Team A namespace
    kubectl create secret generic quotaguard-proxy \
      --from-literal=QUOTAGUARDSTATIC_URL="http://team-a-user:team-a-pass@us-east-static-01.quotaguard.com:9293" \
      -n team-a
    
    # Team B namespace
    kubectl create secret generic quotaguard-proxy \
      --from-literal=QUOTAGUARDSTATIC_URL="http://team-b-user:team-b-pass@us-east-static-01.quotaguard.com:9293" \
      -n team-b
    

    Each namespace gets its own static IP pair. Partners can allowlist team-specific IPs.
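
    If you manage many namespaces, scripting the secret creation keeps credentials consistent. A minimal sketch, assuming a hypothetical team-to-credential mapping and a kubectl context pointed at the target cluster:

    import subprocess

    # Hypothetical mapping -- substitute your real namespaces and credentials
    teams = {
        "team-a": "http://team-a-user:team-a-pass@us-east-static-01.quotaguard.com:9293",
        "team-b": "http://team-b-user:team-b-pass@us-east-static-01.quotaguard.com:9293",
    }

    for namespace, url in teams.items():
        subprocess.run(
            ["kubectl", "create", "secret", "generic", "quotaguard-proxy",
             f"--from-literal=QUOTAGUARDSTATIC_URL={url}", "-n", namespace],
            check=True,
        )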

    Automatic Injection with Mutating Webhook

    For cluster-wide automatic injection, consider a mutating admission webhook that adds proxy environment variables to all pods in specific namespaces. Tools like Kyverno or Gatekeeper can implement this pattern.

    Example Kyverno policy:

    apiVersion: kyverno.io/v1
    kind: ClusterPolicy
    metadata:
      name: inject-quotaguard-proxy
    spec:
      rules:
      - name: inject-proxy-env
        match:
          resources:
            kinds:
            - Pod
            namespaces:
            - team-a
            - team-b
        mutate:
          patchStrategicMerge:
            spec:
              containers:
              - (name): "*"
                env:
                - name: HTTP_PROXY
                  valueFrom:
                    secretKeyRef:
                      name: quotaguard-proxy
                      key: QUOTAGUARDSTATIC_URL
                - name: HTTPS_PROXY
                  valueFrom:
                    secretKeyRef:
                      name: quotaguard-proxy
                      key: QUOTAGUARDSTATIC_URL
                - name: NO_PROXY
                  value: "localhost,127.0.0.1,.svc.cluster.local,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16"
    

    Database Connections (SOCKS5)

    For non-HTTP protocols like PostgreSQL, MySQL, or MongoDB, use QuotaGuard’s SOCKS5 proxy on port 1080.

    Option A: QGTunnel Sidecar (Recommended)

    See Method 2 above. QGTunnel creates local port mappings, so your application connects to localhost without any proxy-aware code.

    Option B: Direct SOCKS5 in Application Code

    If you cannot use a sidecar, configure SOCKS5 directly in your application:

    Python (PostgreSQL with PySocks):

    import os
    import socks
    import socket
    import psycopg2
    
    # Read SOCKS proxy settings from the environment
    socks_host = os.environ.get('QUOTAGUARD_SOCKS_HOST', 'us-east-static-01.quotaguard.com')
    socks_port = int(os.environ.get('QUOTAGUARD_SOCKS_PORT', '1080'))
    socks_user = os.environ.get('QUOTAGUARD_SOCKS_USER')
    socks_pass = os.environ.get('QUOTAGUARD_SOCKS_PASS')
    
    # Configure SOCKS proxy
    socks.set_default_proxy(
        socks.SOCKS5,
        socks_host,
        socks_port,
        username=socks_user,
        password=socks_pass
    )
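    # Caution: replacing socket.socket below routes every new socket in
    # this process through SOCKS5, including internal cluster connections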
    socket.socket = socks.socksocket
    
    # Connect to database
    conn = psycopg2.connect(
        host='your-database.example.com',
        database='mydb',
        user='dbuser',
        password='dbpass'
    )
    

    Install dependencies: pip install PySocks psycopg2-binary

    Kubernetes Secret for SOCKS5:

    apiVersion: v1
    kind: Secret
    metadata:
      name: quotaguard-socks
      namespace: your-namespace
    type: Opaque
    stringData:
      QUOTAGUARD_SOCKS_HOST: "us-east-static-01.quotaguard.com"
      QUOTAGUARD_SOCKS_PORT: "1080"
      QUOTAGUARD_SOCKS_USER: "your-username"
      QUOTAGUARD_SOCKS_PASS: "your-password"
    

    Serverless Kubernetes (Fargate for EKS, GKE Autopilot)

    QuotaGuard works seamlessly with serverless Kubernetes offerings because it operates at the pod level, not the node level.

    AWS Fargate for EKS:

    Fargate runs each pod on its own isolated micro-VM. You cannot control node IPs or configure NAT at the infrastructure level. QuotaGuard works because your pod configures the proxy internally.

    # Works on Fargate - no infrastructure changes needed
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      template:
        spec:
          containers:
          - name: my-app
            image: my-app:latest
            env:
            - name: HTTP_PROXY
              valueFrom:
                secretKeyRef:
                  name: quotaguard-proxy
                  key: QUOTAGUARDSTATIC_URL
            - name: HTTPS_PROXY
              valueFrom:
                secretKeyRef:
                  name: quotaguard-proxy
                  key: QUOTAGUARDSTATIC_URL
            - name: NO_PROXY
              value: "localhost,127.0.0.1,.svc.cluster.local,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16"
    

    GKE Autopilot:

    Same approach. Autopilot manages nodes automatically. QuotaGuard configuration stays the same regardless of where Google schedules your pods.

    Testing Your Implementation

    Verify your static IP configuration by requesting ip.quotaguard.com:

    Quick Test Pod

    kubectl run test-proxy --rm -it --restart=Never \
      --env="HTTP_PROXY=http://username:password@us-east-static-01.quotaguard.com:9293" \
      --env="HTTPS_PROXY=http://username:password@us-east-static-01.quotaguard.com:9293" \
      --image=curlimages/curl -- \
      curl -s https://ip.quotaguard.com
    

    Expected response:

    {"ip":"52.34.188.175"}
    

    The returned IP should match one of your two static IPs shown in the QuotaGuard dashboard. Run it multiple times to see both IPs in action (load-balanced).
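
    The same check can be scripted. A short sketch that exercises the load balancing and collects the distinct egress IPs observed (run it from a pod with the proxy Secret injected):

    import os
    import requests

    proxy_url = os.environ["QUOTAGUARDSTATIC_URL"]
    proxies = {"http": proxy_url, "https": proxy_url}

    # Hit the IP echo endpoint repeatedly; both static IPs should appear
    seen = set()
    for _ in range(10):
        ip = requests.get("https://ip.quotaguard.com",
                          proxies=proxies, timeout=10).json()["ip"]
        seen.add(ip)
    print("egress IPs observed:", seen)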

    Test from Running Pod

    kubectl exec -it my-app-pod-xyz -n your-namespace -- \
      curl -s https://ip.quotaguard.com
    

    Application-Level Test

    Add a health check endpoint that verifies proxy configuration:

    # Python/Flask example
    import os
    import requests
    from flask import Flask, jsonify
    
    app = Flask(__name__)
    
    @app.route('/health/proxy')
    def proxy_health():
        proxy_url = os.environ.get('QUOTAGUARDSTATIC_URL')
        if not proxy_url:
            return jsonify({'error': 'Proxy not configured'}), 500
        
        proxies = {'http': proxy_url, 'https': proxy_url}
        
        try:
            response = requests.get('https://ip.quotaguard.com', proxies=proxies, timeout=10)
            return jsonify({
                'static_ip': response.json()['ip'],
                'proxy_configured': True
            })
        except Exception as e:
            return jsonify({'error': str(e)}), 500
    

    Latency Considerations

    Using QuotaGuard adds a network hop to your requests:

    Configuration | Added Latency
    Same region (cluster + proxy) | 10-30 ms
    Cross-region | 50-100 ms

    Recommendations:

    1. Match regions: Deploy QuotaGuard proxy in the same region as your cluster
    2. Selective proxying: Only route traffic that requires static IPs through the proxy. Use NO_PROXY to keep internal and public traffic direct.
    3. Connection pooling: QuotaGuard handles connection pooling. You do not need to limit concurrent connections from your pods.
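
    To quantify the overhead in your own environment, time the same request with and without the proxy. A rough sketch (single samples include jitter, so average several runs for a fair number):

    import os
    import time
    import requests

    def timed_get(url, proxies=None):
        start = time.perf_counter()
        requests.get(url, proxies=proxies, timeout=10)
        return (time.perf_counter() - start) * 1000  # milliseconds

    proxy_url = os.environ["QUOTAGUARDSTATIC_URL"]
    direct_ms = timed_get("https://ip.quotaguard.com")
    proxied_ms = timed_get("https://ip.quotaguard.com",
                           proxies={"http": proxy_url, "https": proxy_url})
    print(f"direct: {direct_ms:.0f} ms, proxied: {proxied_ms:.0f} ms")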

    Troubleshooting

    407 Proxy Authentication Required

    Your credentials are incorrect. Verify the username and password in your Secret:

    kubectl get secret quotaguard-proxy -n your-namespace -o jsonpath='{.data.QUOTAGUARDSTATIC_URL}' | base64 -d
    

    Connection Timeout to External Services

    1. Verify your pods can reach external networks (check NetworkPolicies)
    2. Check that the QuotaGuard proxy hostname is correct
    3. Ensure port 9293 is not blocked by your cluster’s egress policies

    Internal Services Unreachable (Service-to-Service Calls Failing)

    Your NO_PROXY configuration is missing or incorrect. Internal cluster traffic is routing through the proxy.

    1. Verify the ConfigMap contains correct CIDRs
    2. Check that your deployment references the ConfigMap
    3. Verify the CIDRs match your cluster’s pod and service ranges:

    # Check your cluster's CIDRs
    kubectl cluster-info dump | grep -E "(cluster-cidr|service-cluster-ip-range)"
    

    Wrong IP Address Returned

    The proxy may not be configured correctly:

    1. Verify environment variables are set in the pod: kubectl exec pod-xyz -- env | grep PROXY
    2. Check that your HTTP client respects proxy environment variables
    3. For Node.js, ensure you are using explicit proxy configuration

    QGTunnel Sidecar Not Working

    1. Check sidecar logs: kubectl logs pod-xyz -c qgtunnel
    2. Verify the .qgtunnel config file is mounted correctly
    3. Ensure QUOTAGUARDSTATIC_URL secret is accessible to the sidecar
    4. Enable debug logging by setting QGTUNNEL_DEBUG=true
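
    From inside the application container, a quick socket test confirms whether the tunnel listener is up (this assumes the PostgreSQL tunnel on local port 5432 from Method 2; adjust the port for your tunnel):

    import socket

    # The sidecar shares the pod's network namespace, so it listens on localhost
    with socket.create_connection(("localhost", 5432), timeout=5):
        print("QGTunnel is accepting connections on localhost:5432")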

    QuotaGuard Static vs QuotaGuard Shield

    QuotaGuard offers two products for static IPs:

    Feature | QuotaGuard Static | QuotaGuard Shield
    Protocol | HTTP/SOCKS5 | HTTPS/SOCKS5 over TLS
    Encryption | Standard proxy | SSL passthrough (end-to-end encryption)
    Best for | General API access | HIPAA, PCI-DSS, regulated data
    Starting price | $19/month | $69/month

    For most Kubernetes workloads, QuotaGuard Static provides everything you need. Choose Shield if you are handling protected health information (PHI), payment card data, or have specific compliance requirements where the proxy must not be able to inspect traffic.


    Ready to Get Started?

    Get in touch or create a free trial account.

    Try QuotaGuard Now

    View Kubernetes Integration Features

    Contact Support

