Proxy Error Codes: Complete Reference
HTTP 429 (Too Many Requests) accounts for ~40% of all proxy failures, followed by 403 Forbidden at ~25% and timeout errors at ~20%. This reference covers every proxy error category: HTTP status codes, SOCKS5 reply codes, SSL/TLS failures, DNS leaks, and connection errors.
Each error includes causes, code examples, and fixes. Mobile proxies achieve 5-10% overall failure rates compared to 40-60% for datacenter proxies due to CGNAT trust mechanics (RFC 6598).
Complete proxy error code reference with causes, code examples, debugging tools, and best practices.
Reading time: ~25 minutes. Covers 8 HTTP error codes, 4 SOCKS5 errors, 3 SSL/TLS errors, 5 connection/DNS errors, 4 debugging tools, and 4 best practice patterns.
HTTP Proxy Error Codes
HTTP proxy errors use standard status codes (RFC 7231, RFC 6585, RFC 7235). These 8 codes account for 90%+ of all HTTP-level proxy failures. Each entry includes causes, fixes, and tested code examples.
407 Proxy Authentication Required
The proxy server requires authentication before forwarding requests. Defined in RFC 7235 Section 3.2. The proxy MUST send a Proxy-Authenticate header field containing a challenge applicable to the proxy. Unlike 401 (which authenticates with the origin server), 407 authenticates with the proxy itself.
407 vs 401: HTTP 401 means the origin server rejected your credentials. HTTP 407 means the proxy server rejected your credentials. With proxy chains, you can get both: 407 from the proxy and 401 from the target site, requiring separate credential sets.
Common Causes:
- Missing Proxy-Authorization header in the request
- Incorrect username/password for the proxy
- IP address not whitelisted in the proxy provider dashboard
- Expired proxy subscription or exceeded bandwidth quota
- Using HTTP proxy URL format for a SOCKS5 proxy (protocol mismatch)
Fix (with code):
# Test proxy authentication with curl
curl -x http://user:pass@proxy.coronium.io:8080 \
  -v https://httpbin.org/ip

# Python requests with proxy auth
import requests

proxies = {
    "http": "http://user:pass@proxy.coronium.io:8080",
    "https": "http://user:pass@proxy.coronium.io:8080",
}
response = requests.get("https://httpbin.org/ip", proxies=proxies)

# Node.js with proxy-agent
import { HttpsProxyAgent } from 'https-proxy-agent';

const agent = new HttpsProxyAgent('http://user:pass@proxy.coronium.io:8080');
const res = await fetch('https://httpbin.org/ip', { agent });
403 Forbidden
The target server understood the request but refuses to authorize it. In proxy contexts, this typically means the target site detected and blocked the proxy IP. This is the second most common proxy error after 429. Cloudflare, Akamai, and DataDome return 403 when they classify traffic as bot activity based on IP reputation, TLS fingerprint, or behavioral analysis.
Common Causes:
- Proxy IP is on a blacklist (common with datacenter IPs)
- Target site geo-restricts content and proxy IP is in a blocked region
- Bot detection system (Cloudflare, Akamai) flagged the request
- JA3/JA4 TLS fingerprint mismatch (Python requests vs. real browser)
- Too many requests from the same IP triggered an automatic ban
- User-Agent string is missing, outdated, or blacklisted
Fix (with code):
# Rotate to a new proxy IP on 403
import requests
from itertools import cycle

proxy_pool = cycle([
    "http://user:pass@proxy1.coronium.io:8080",
    "http://user:pass@proxy2.coronium.io:8080",
    "http://user:pass@proxy3.coronium.io:8080",
])

def fetch_with_rotation(url, max_retries=3):
    for attempt in range(max_retries):
        proxy = next(proxy_pool)
        try:
            resp = requests.get(
                url, proxies={"https": proxy}, timeout=30,
                headers={"User-Agent": "Mozilla/5.0 (Linux; Android 14) Chrome/126.0"},
            )
            if resp.status_code != 403:
                return resp
            # 403 -> rotate IP
            print(f"403 on attempt {attempt+1}, rotating IP...")
        except requests.RequestException:
            continue
    return None
429 Too Many Requests
The most common proxy error. Defined in RFC 6585 Section 4. The server is rate-limiting requests from the proxy IP. The response SHOULD include a Retry-After header indicating how long to wait. In practice, most sites return 429 without Retry-After, requiring exponential backoff. Google returns 429 after ~100 requests/IP/hour. Amazon after 30-50 requests. LinkedIn after 1-5 requests per IP.
Common Causes:
- Exceeding the target site rate limit from a single proxy IP
- Multiple users sharing the same proxy IP (common with datacenter pools)
- No delay between requests (burst scraping)
- Proxy provider rate-limiting your account bandwidth
- Fixed request intervals creating a detectable pattern
Fix (with code):
# Exponential backoff with jitter on 429
import time
import random
import requests

def fetch_with_backoff(url, proxy, max_retries=5):
    for attempt in range(max_retries):
        resp = requests.get(url, proxies={"https": proxy}, timeout=30)
        if resp.status_code == 429:
            # Check Retry-After header
            retry_after = resp.headers.get("Retry-After")
            if retry_after:
                wait = int(retry_after)
            else:
                # Exponential backoff: 1, 2, 4, 8, 16 seconds + jitter
                wait = (2 ** attempt) + random.uniform(0, 1)
            print(f"429 rate limited. Waiting {wait:.1f}s (attempt {attempt+1})")
            time.sleep(wait)
        else:
            return resp
    # After max retries, rotate proxy IP
    return None
502 Bad Gateway
The proxy server received an invalid response from the upstream (target) server. In proxy contexts, this means the proxy successfully connected to the target, but the target returned something the proxy could not parse as a valid HTTP response. Common when the target server crashes mid-response, returns malformed headers, or closes the connection unexpectedly. Also occurs when proxy software (Squid, Nginx, HAProxy) has misconfigured upstream definitions.
Common Causes:
- Target server crashed or returned a malformed response
- Network interruption between the proxy and the target server
- Proxy software buffer overflow on large responses
- SSL/TLS version mismatch between proxy and target
- Target server closed the connection before sending a complete response (connection: close race)
Fix (with code):
# Retry on 502 with a different proxy
import requests
import random

def handle_502(url, proxies_list, max_retries=3):
    for attempt in range(max_retries):
        proxy = random.choice(proxies_list)
        try:
            resp = requests.get(url, proxies={"https": proxy}, timeout=30)
            if resp.status_code == 502:
                print(f"502 Bad Gateway via {proxy}, trying different proxy...")
                continue
            return resp
        except requests.exceptions.ConnectionError:
            continue
    return None
503 Service Unavailable
The target server is temporarily unable to handle the request. Defined in RFC 7231. In proxy contexts, Cloudflare returns 503 with a "challenge page" when it suspects bot activity but wants to give the client a chance to prove it is human. Legitimate 503s mean the target server is overloaded or in maintenance mode. Cloudflare-specific: a 503 with "cf-chl-bypass" in the response body indicates a Turnstile/JS challenge, not a true server error.
Common Causes:
- Target server is overloaded or under maintenance
- Cloudflare is serving a JavaScript challenge page (check response body)
- Proxy provider is temporarily out of available IPs in the requested region
- CDN-level rate limiting applied to the proxy IP range
- Server-side DDoS protection triggering on proxy traffic patterns
Fix (with code):
# Distinguish real 503 from Cloudflare challenge
import requests

def handle_503(url, proxy):
    resp = requests.get(url, proxies={"https": proxy}, timeout=30)
    if resp.status_code == 503:
        # Check if it's a Cloudflare challenge
        if "cf-chl-bypass" in resp.text or "challenge-platform" in resp.text:
            print("Cloudflare JS challenge detected -> use browser automation")
            # Switch to Playwright/Puppeteer with mobile proxy
            return None
        elif "Retry-After" in resp.headers:
            wait = int(resp.headers["Retry-After"])
            print(f"Server maintenance, retry after {wait}s")
            return None
        else:
            print("503: server overloaded, retry with backoff")
            return None
    return resp
504 Gateway Timeout
The proxy did not receive a timely response from the target server. Defined in RFC 7231 Section 6.6.5. The proxy waited for the upstream server to respond but the connection timed out. Default timeout varies: Nginx defaults to 60s (proxy_read_timeout), Squid defaults to 30-120s (read_timeout). In scraping, this often means the target server is deliberately slow-responding to suspected bots (tarpitting).
Common Causes:
- Target server is slow or unresponsive (overloaded)
- Network latency between proxy and target exceeds timeout
- Target server is tarpitting suspected bot IPs (intentional slow response)
- Proxy timeout setting is too low for the target response time
- DNS resolution delays at the proxy level
- Large response payload exceeding proxy buffer or timeout
Fix (with code):
# Handle 504 with increased timeout and fallback
import requests

def handle_504(url, proxy, initial_timeout=30):
    timeouts = [initial_timeout, 60, 120]  # Escalating timeouts
    for timeout in timeouts:
        try:
            resp = requests.get(url, proxies={"https": proxy}, timeout=timeout)
            if resp.status_code == 504:
                print(f"504 timeout at {timeout}s, increasing...")
                continue
            return resp
        except requests.exceptions.Timeout:
            print(f"Client-side timeout at {timeout}s")
            continue
    # All timeouts exhausted -> try different proxy (different route)
    print("All timeouts exhausted, rotate proxy")
    return None
400 Bad Request
The target server cannot process the request due to malformed syntax. In proxy contexts, this often occurs when the proxy modifies the request in a way that breaks it: adding malformed headers, double-encoding URLs, or corrupting the request body. Also common when using HTTP CONNECT tunneling with incorrect host:port format.
Common Causes:
- Proxy adding malformed or duplicate headers (e.g., double Via header)
- URL encoding issues when proxy rewrites the request path
- Incorrect HTTP CONNECT tunnel format (missing port or wrong host)
- Request body corruption during proxy forwarding
- HTTP/2 downgrade to HTTP/1.1 losing required pseudo-headers
Fix (with code):
# Debug 400 errors by comparing direct vs proxied requests
import requests
url = "https://httpbin.org/headers"
proxy = "http://user:pass@proxy.coronium.io:8080"
# Direct request
direct = requests.get(url, timeout=30)
print("Direct headers:", direct.json())
# Proxied request
proxied = requests.get(url, proxies={"https": proxy}, timeout=30)
print("Proxied headers:", proxied.json())
# Compare to identify proxy-added/modified headers
# Look for: Via, X-Forwarded-For, X-Proxy-ID headers
401 Unauthorized (vs 407)
The target server requires authentication. Different from 407: HTTP 401 authenticates with the origin server via the Authorization header. HTTP 407 authenticates with the proxy via the Proxy-Authorization header. When using proxies to access authenticated APIs, you need both: Proxy-Authorization for the proxy and Authorization for the target. Common confusion: getting 401 and thinking it is a proxy issue when it is actually the target rejecting credentials.
Common Causes:
- Target API requires Bearer token or Basic auth, not provided
- Confusing 401 (target auth) with 407 (proxy auth)
- Proxy stripping the Authorization header during forwarding
- OAuth token expired and proxy is caching the old request
- Target server rejecting credentials from proxy IP range (IP-bound tokens)
Fix (with code):
# Handle both proxy auth (407) and target auth (401)
import requests
proxy = "http://proxy_user:proxy_pass@proxy.coronium.io:8080"
api_url = "https://api.example.com/data"
api_token = "Bearer your_api_token_here"
resp = requests.get(
    api_url,
    proxies={"https": proxy},              # Proxy auth in URL -> Proxy-Authorization
    headers={"Authorization": api_token},  # Target auth -> Authorization
    timeout=30,
)
if resp.status_code == 407:
    print("Proxy credentials invalid -> check proxy user/pass")
elif resp.status_code == 401:
    print("Target API credentials invalid -> check API token")
elif resp.status_code == 200:
    print("Success:", resp.json())
SOCKS5 Error Codes
SOCKS5 (RFC 1928) uses its own reply codes in the connection response. The reply field is a single byte: 0x00 means success, any other value indicates an error. These are the codes you will encounter in practice.
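As a quick reference, the RFC 1928 reply codes can be decoded with a small lookup table. The code values below are from the RFC; the helper function name is illustrative:

```python
# RFC 1928 reply codes (byte 2 of the server's reply to a CONNECT request)
SOCKS5_REPLIES = {
    0x00: "succeeded",
    0x01: "general SOCKS server failure",
    0x02: "connection not allowed by ruleset",
    0x03: "network unreachable",
    0x04: "host unreachable",
    0x05: "connection refused",
    0x06: "TTL expired",
    0x07: "command not supported",
    0x08: "address type not supported",
}

def describe_reply(reply_byte: int) -> str:
    """Translate a SOCKS5 reply byte into a human-readable error."""
    return SOCKS5_REPLIES.get(reply_byte, f"unassigned (0x{reply_byte:02x})")
```

For example, `describe_reply(0x05)` returns "connection refused" -- the code covered below.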
SOCKS5 Authentication Failure
The SOCKS5 proxy rejected the client authentication credentials. SOCKS5 (RFC 1928) supports multiple auth methods: 0x00 (no auth), 0x02 (username/password per RFC 1929), 0x03-0x7F (IANA assigned). The server responds with 0xFF if no acceptable method is found. Username/password auth failure returns status 0x01 (failure) in the sub-negotiation response.
Common Causes:
- Incorrect SOCKS5 username or password
- Client requesting auth method 0x00 (no auth) but server requires 0x02 (username/password)
- Using HTTP proxy credentials for a SOCKS5 proxy (different credential sets)
- SOCKS5 proxy not configured for username/password auth (only IP whitelist)
Fix:
# Test SOCKS5 auth with curl
curl --socks5 user:pass@proxy.coronium.io:1080 \
https://httpbin.org/ip -v
# Python with PySocks
import socks
import socket
socks.set_default_proxy(
    socks.SOCKS5, "proxy.coronium.io", 1080,
    username="user", password="pass"
)
socket.socket = socks.socksocket

# Or with requests + socks
import requests

proxies = {"https": "socks5://user:pass@proxy.coronium.io:1080"}
resp = requests.get("https://httpbin.org/ip", proxies=proxies)
Connection Refused by Destination
The SOCKS5 proxy successfully connected to the target IP but the target actively refused the TCP connection (RST packet). This means the target host is reachable but nothing is listening on the requested port. Defined in RFC 1928 reply field 0x05.
Common Causes:
- Target service is not running on the specified port
- Target firewall is actively rejecting connections from the proxy IP range
- Wrong port number in the SOCKS5 CONNECT request
- Target server has reached its maximum connection limit
Fix:
# Verify the target port is correct and service is running
# First test directly (if possible)
curl -v https://target-site.com:443

# Then test through SOCKS5
curl --socks5-hostname user:pass@proxy.coronium.io:1080 \
  https://target-site.com:443 -v

# Common port issues:
# - Using port 80 instead of 443 for HTTPS
# - Target redirects to a different port
# - Service only listens on specific interfaces
TTL Expired (Network Unreachable)
The SOCKS5 proxy could not route packets to the target host because the IP TTL (Time-To-Live) was exhausted before reaching the destination. In practice, this means the target is unreachable from the proxy network due to routing issues, or the destination IP does not exist. RFC 1928 reply field 0x06.
Common Causes:
- Target server IP address does not exist or is not routed
- Network routing issue between the proxy and the target (ISP problem)
- Target is behind a VPN or private network not accessible from the proxy
- DNS returned a stale/incorrect IP for the target hostname
Fix:
# Diagnose TTL/routing issues
# Use --socks5-hostname to let proxy handle DNS
curl --socks5-hostname user:pass@proxy.coronium.io:1080 \
  https://target-site.com -v

# If the above fails, the proxy cannot reach the target
# Try a proxy in a different region/network
curl --socks5-hostname user:pass@proxy-us.coronium.io:1080 \
  https://target-site.com -v

# Verify DNS resolution at proxy level
# (SOCKS5 with -hostname flag delegates DNS to proxy)
Host Unreachable
The proxy received an ICMP "host unreachable" message when trying to connect to the target. Different from TTL expired: host unreachable means the last-hop router cannot reach the target, while TTL expired means the packet ran out of hops. RFC 1928 reply field 0x04.
Common Causes:
- Target server is offline or has no route from the proxy network
- Target hostname resolved to a private/internal IP (10.x, 192.168.x)
- ISP-level blocking of the target IP from the proxy region
- Target server firewall dropping all packets (not even sending RST)
Fix:
# Test with multiple proxy regions to isolate network issue
import requests

target_url = "https://target-site.com"  # The unreachable target
regions = ["us", "de", "uk", "sg"]
for region in regions:
    proxy = f"socks5://user:pass@proxy-{region}.coronium.io:1080"
    try:
        resp = requests.get(target_url, proxies={"https": proxy}, timeout=15)
        print(f"{region}: {resp.status_code}")
    except Exception as e:
        print(f"{region}: {e}")

# If all fail -> target is truly down
# If some succeed -> regional routing issue
SSL/TLS Proxy Errors
SSL/TLS errors occur at the encryption layer, before any HTTP data is exchanged. These are increasingly common as anti-bot systems use TLS fingerprinting (JA3/JA4) to identify and block non-browser clients.
SSL Handshake Failure
The TLS handshake between the client and the proxy (or proxy and target) failed. This occurs when the client and server cannot agree on a common cipher suite, TLS version, or when certificate verification fails. With HTTPS proxies using CONNECT tunneling, the TLS handshake happens directly between the client and the target through the proxy tunnel. With MITM proxies (Charles, mitmproxy), the proxy terminates TLS and re-encrypts, requiring the client to trust the proxy CA certificate.
Common Causes:
- TLS version mismatch (client requires TLS 1.3, proxy only supports TLS 1.2)
- No common cipher suite between client and proxy/target
- Proxy MITM certificate not trusted by the client
- Client certificate required by target but not provided through proxy
- SNI (Server Name Indication) mismatch in the TLS ClientHello
Fix:
# Debug TLS handshake with openssl
openssl s_client -connect proxy.coronium.io:8080 \
  -servername proxy.coronium.io -tls1_3

# Test through proxy with curl verbose TLS output
curl -x http://user:pass@proxy.coronium.io:8080 \
  https://target-site.com -v --tlsv1.3

# Python: force TLS version
import ssl
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.ssl_ import create_urllib3_context

class TLS13Adapter(HTTPAdapter):
    def init_poolmanager(self, *args, **kwargs):
        ctx = create_urllib3_context()
        ctx.minimum_version = ssl.TLSVersion.TLSv1_3
        kwargs["ssl_context"] = ctx
        return super().init_poolmanager(*args, **kwargs)
Certificate Mismatch
The SSL certificate presented does not match the expected hostname. Common with HTTPS proxies that perform SSL interception (MITM). The proxy presents its own certificate for the target domain, but if the proxy CA is not in the client trust store, the client rejects it. Also occurs when proxy software (Squid SSL Bump, mitmproxy) has an expired or misconfigured CA certificate.
Common Causes:
- MITM proxy presenting a certificate for a different domain
- Proxy CA certificate not installed in the client trust store
- Proxy SSL interception certificate has expired
- Wildcard certificate does not cover the subdomain being accessed
- IP-based proxy access without SNI support
Fix:
# For MITM proxies (mitmproxy, Charles), install the CA cert
# mitmproxy: ~/.mitmproxy/mitmproxy-ca-cert.pem
# Python: trust custom CA
import requests
resp = requests.get(
    "https://target.com",
    proxies={"https": "http://proxy:8080"},
    verify="/path/to/proxy-ca-cert.pem"  # Custom CA bundle
)

# Or disable verification (development only, NEVER in production)
# resp = requests.get(url, proxies=proxies, verify=False)

# Node.js: trust custom CA
# NODE_EXTRA_CA_CERTS=/path/to/proxy-ca.pem node script.js
JA3/JA4 TLS Fingerprint Rejection
Not a traditional SSL error but the most impactful TLS-level block in 2026. JA3 (created by Salesforce in 2017) fingerprints the TLS ClientHello by hashing: TLS version, cipher suites, extensions, elliptic curves, and EC point formats. JA4 (2023) extends this with additional parameters. Cloudflare, Akamai, and DataDome compare the JA3/JA4 hash against known browser fingerprints. Python requests produces a JA3 hash that is immediately distinguishable from Chrome, causing instant 403 blocks regardless of IP quality.
Common Causes:
- Using Python requests/httpx (non-browser TLS fingerprint)
- Using curl without browser impersonation (default curl JA3)
- Headless browser with TLS configuration modified by automation tools
- Outdated browser version producing a deprecated JA3 hash
- Go net/http or Java HttpClient producing non-browser fingerprints
Fix:
# Solution 1: Use curl_cffi to impersonate browser TLS
from curl_cffi import requests as cffi_requests

resp = cffi_requests.get(
    "https://target.com",
    impersonate="chrome",  # Matches Chrome JA3/JA4
    proxies={"https": "http://user:pass@proxy.coronium.io:8080"}
)

# Solution 2: Use Playwright for authentic browser TLS
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    context = browser.new_context(
        proxy={"server": "http://proxy.coronium.io:8080",
               "username": "user", "password": "pass"}
    )
    page = context.new_page()
    page.goto("https://target.com")  # Real Chrome JA3

# Verify your JA3 hash: https://tls.peet.ws/api/all
JA3/JA4 Is the #1 TLS-Level Block in 2026
Traditional SSL errors (handshake failure, certificate mismatch) are straightforward to fix. JA3/JA4 fingerprint rejection is far more impactful because it is invisible -- you receive a normal HTTP 403, not an SSL error. The block happens at the TLS handshake level before any HTTP data is sent. Cloudflare, Akamai, and DataDome all use JA3/JA4 to reject non-browser clients regardless of IP quality. Always verify your TLS fingerprint at https://tls.peet.ws/api/all when debugging unexplained 403 errors.
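To make the fingerprinting mechanism concrete: the published JA3 algorithm joins the five ClientHello fields with commas (dash-separated decimal values within each field) and MD5-hashes the result. A minimal sketch -- the field values in the example call are illustrative, not a real Chrome fingerprint:

```python
import hashlib

def ja3_hash(tls_version, ciphers, extensions, curves, point_formats):
    # JA3 string: "version,ciphers,extensions,curves,point_formats",
    # each list rendered as dash-separated decimal values
    ja3_string = ",".join([
        str(tls_version),
        "-".join(map(str, ciphers)),
        "-".join(map(str, extensions)),
        "-".join(map(str, curves)),
        "-".join(map(str, point_formats)),
    ])
    return hashlib.md5(ja3_string.encode()).hexdigest()

# Illustrative values only; a real fingerprint comes from a captured ClientHello
print(ja3_hash(771, [4865, 4866], [0, 11, 10], [29, 23], [0]))
```

Because the hash is deterministic, any client that always sends the same ClientHello produces the same JA3 -- which is exactly why Python requests is trivially identifiable no matter which proxy IP it uses.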
Connection & DNS Errors
These errors occur at the TCP/IP and DNS layers, below HTTP. They indicate network-level problems: the proxy is unreachable, the connection was dropped, or DNS resolution failed. DNS leaks are a special category that compromise anonymity without causing visible errors.
Connection Refused
The proxy server actively refused the TCP connection. The client received a TCP RST packet in response to its SYN. This means the proxy host is reachable but nothing is listening on the specified port. Different from a timeout (no response at all).
Common Causes:
- Proxy service is not running on the specified port
- Firewall on the proxy host is actively rejecting connections
- Wrong proxy port number (e.g., using HTTP port 8080 for SOCKS5 on 1080)
- Proxy server reached its maximum connection limit
- Proxy IP address is correct but the service crashed
Fix:
# Verify proxy is listening
# Test TCP connection with nc (netcat)
nc -zv proxy.coronium.io 8080

# Test with curl verbose
curl -x http://user:pass@proxy.coronium.io:8080 \
  https://httpbin.org/ip -v

# Common port assignments:
# HTTP proxy: 8080, 3128, 80
# SOCKS5: 1080, 1085
# HTTPS proxy: 8443
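The RST-vs-silence distinction can also be checked from Python at the socket level: an active refusal returns immediately, while a filtered port hangs until the timeout. A sketch (host and port are placeholders):

```python
import socket

def probe_port(host: str, port: int, timeout: float = 5.0) -> str:
    """Classify a TCP connect attempt: open, refused (RST), or filtered (timeout)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "open"
    except ConnectionRefusedError:
        return "refused"   # RST received: host up, nothing listening on port
    except socket.timeout:
        return "filtered"  # No reply: firewall drop or host unreachable
    except OSError:
        return "error"     # e.g. no route to host, DNS failure
    finally:
        s.close()
```

A "refused" result means you likely have the wrong port or the proxy service crashed; a "filtered" result points at a firewall or a down host (see Connection Timed Out below).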
Connection Timed Out
The TCP connection to the proxy server never completed. No SYN-ACK was received within the timeout period. This means the proxy host is either unreachable, the port is filtered (firewall drops packets silently), or network congestion is preventing the connection. Different from ECONNREFUSED (which gets an immediate rejection).
Common Causes:
- Proxy server is down or network is unreachable
- Firewall silently dropping packets (no RST, no ICMP unreachable)
- Network congestion between client and proxy
- Proxy IP has changed and DNS has not updated
- ISP blocking the proxy port (common in corporate networks)
Fix:
# Diagnose timeout: is it the proxy or the target?
import requests
import time

proxy = "http://user:pass@proxy.coronium.io:8080"

# Step 1: Test proxy connectivity (fast target)
start = time.time()
try:
    resp = requests.get("https://httpbin.org/ip",
                        proxies={"https": proxy}, timeout=10)
    print(f"Proxy OK: {time.time()-start:.1f}s")
except requests.exceptions.ConnectTimeout:
    print("ETIMEDOUT: Cannot reach proxy server")
except requests.exceptions.ReadTimeout:
    print("Proxy connected but target timed out")

# Step 2: If proxy works, test target directly
try:
    resp = requests.get("https://target-site.com", timeout=10)
    print("Target reachable directly -> proxy routing issue")
except requests.RequestException:
    print("Target unreachable directly too -> target is down")
Connection Reset by Peer
The established TCP connection was forcibly closed by the remote side (proxy or target) by sending a RST packet. This can happen mid-transfer, unlike ECONNREFUSED which happens at connection time. In proxy contexts, this often means the proxy or target terminated the connection due to suspicious traffic, exceeded limits, or protocol violations.
Common Causes:
- Proxy server forcibly closing connections from banned IPs
- Target server resetting connections from proxy IP ranges
- Keep-alive connection expired while idle
- SSL/TLS version negotiation failure mid-handshake
- Proxy connection pool recycling connections aggressively
- Intermediate firewall/IDS terminating the connection
Fix:
# Handle ECONNRESET with retry and fresh connections
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
retry = Retry(
    total=3,
    backoff_factor=1,  # 1s, 2s, 4s
    status_forcelist=[502, 503, 504],
    allowed_methods=["GET", "POST"],
)
adapter = HTTPAdapter(
    max_retries=retry,
    pool_connections=10,
    pool_maxsize=10,
)
session.mount("https://", adapter)
session.mount("http://", adapter)

# Disable keep-alive to force fresh connections
session.headers["Connection"] = "close"
resp = session.get(url, proxies={"https": proxy}, timeout=30)
DNS Resolution Leak
DNS queries bypass the proxy and go directly to the local DNS resolver, revealing the client real IP to the DNS server. This defeats the anonymity purpose of the proxy. With HTTP proxies using CONNECT tunneling, DNS is typically resolved by the client before connecting through the proxy. With SOCKS5, DNS resolution can be delegated to the proxy (SOCKS5h / --socks5-hostname) or handled locally (SOCKS5 / --socks5).
Common Causes:
- Using SOCKS5 instead of SOCKS5h (DNS resolved locally)
- WebRTC leaking real IP in browser-based scraping
- System DNS resolver ignoring proxy settings (OS-level bypass)
- Browser DNS prefetching resolving domains directly
- Split tunneling configuration excluding DNS traffic from proxy
Fix:
# SOCKS5 vs SOCKS5h: the DNS leak difference
# SOCKS5 (local DNS, LEAKS):
proxies = {"https": "socks5://user:pass@proxy:1080"}
# SOCKS5h (proxy-side DNS, SAFE):
proxies = {"https": "socks5h://user:pass@proxy:1080"}
# curl: --socks5 vs --socks5-hostname
# LEAKS DNS:
curl --socks5 user:pass@proxy:1080 https://target.com
# SAFE (proxy resolves DNS):
curl --socks5-hostname user:pass@proxy:1080 https://target.com
# Test for DNS leak:
# Visit https://dnsleaktest.com through your proxy
# If you see your ISP DNS servers -> DNS is leaking
DNS Resolution Failed Through Proxy
The proxy server failed to resolve the target hostname to an IP address. When using SOCKS5h or HTTP CONNECT tunneling, the proxy handles DNS resolution. If the proxy DNS server is misconfigured, overloaded, or the domain does not exist, the connection fails before any HTTP request is made.
Common Causes:
- Target domain does not exist (typo in URL)
- Proxy DNS server is down or overloaded
- Proxy DNS server does not have the domain in its cache and recursive lookup fails
- DNS-level censorship in the proxy server region
- Domain recently registered and DNS propagation has not reached the proxy DNS
Fix:
# Debug DNS resolution through proxy
# Test if proxy can resolve the domain
curl --socks5-hostname user:pass@proxy:1080 \
  https://target-site.com -v 2>&1 | grep "Resolving"

# Alternative: use a known-working domain first
curl --socks5-hostname user:pass@proxy:1080 \
  https://httpbin.org/ip -v

# If httpbin works but target doesn't:
# -> DNS issue specific to that domain at proxy location
# Try proxy in a different region
Debugging Proxy Errors
Four tools for diagnosing proxy errors at every level: CLI (curl), interception (mitmproxy), browser (DevTools), and GUI (Charles Proxy). Start with curl for quick tests, use mitmproxy for deep inspection.
curl with -x flag
The fastest way to test proxy connectivity, authentication, and response headers. The -x flag sets the proxy, -v enables verbose output showing the full TCP and TLS handshake, and --proxy-header adds custom proxy headers. Supports HTTP, HTTPS, SOCKS4, and SOCKS5 proxies.
# HTTP proxy test
curl -x http://user:pass@proxy:8080 https://httpbin.org/ip -v
# SOCKS5 proxy test (proxy-side DNS)
curl --socks5-hostname user:pass@proxy:1080 https://httpbin.org/ip -v
# Show response headers only
curl -x http://user:pass@proxy:8080 -I https://target.com
# Test with specific TLS version
curl -x http://user:pass@proxy:8080 --tlsv1.3 https://target.com -v
# Time the connection phases
curl -x http://user:pass@proxy:8080 \
  -w "dns: %{time_namelookup}s\nconnect: %{time_connect}s\ntls: %{time_appconnect}s\ntotal: %{time_total}s\n" \
  -o /dev/null -s https://target.com
mitmproxy
Open-source interactive HTTPS proxy (20K+ GitHub stars) for inspecting, modifying, and replaying HTTP/HTTPS traffic. Run mitmproxy between your scraping client and the upstream proxy to see exactly what requests and responses look like. Shows full request/response headers, body content, TLS certificate details, and timing. Supports scripting with Python for automated traffic modification.
# Install mitmproxy
pip install mitmproxy

# Run as intercepting proxy on port 9090
mitmproxy --listen-port 9090 \
  --mode upstream:http://user:pass@proxy.coronium.io:8080

# Your scraper connects to localhost:9090
# mitmproxy forwards to your actual proxy
# You see all traffic in the mitmproxy TUI

# Dump mode (non-interactive, log to file)
mitmdump --listen-port 9090 \
  --mode upstream:http://proxy:8080 -w traffic.flow

# Filter specific domains
mitmproxy --listen-port 9090 \
  --mode upstream:http://proxy:8080 \
  --set intercept="~d target-site.com"
Browser DevTools Network Tab
Chrome/Firefox DevTools Network tab shows all HTTP requests, response codes, timing waterfall, request/response headers, and TLS certificate info. When using browser-based scraping (Playwright/Puppeteer), enable DevTools to see exactly which requests fail and why. Filter by status code (e.g., "status-code:403") to find blocked requests. The Timing tab shows DNS, TCP connect, TLS handshake, and content download breakdown.
# Playwright: capture network events programmatically
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    context = browser.new_context(
        proxy={"server": "http://proxy:8080",
               "username": "user", "password": "pass"}
    )
    page = context.new_page()

    # Log all failed requests
    page.on("requestfailed", lambda req:
            print(f"FAILED: {req.url} -> {req.failure}"))

    # Log error responses (status >= 400)
    page.on("response", lambda resp:
            print(f"{resp.status} {resp.url}") if resp.status >= 400 else None)

    page.goto("https://target-site.com")
Charles Proxy
Commercial HTTP debugging proxy with a visual interface for inspecting SSL/TLS traffic. SSL Proxying mode decrypts HTTPS traffic by acting as a MITM, allowing you to see the full request and response body. Breakpoint feature lets you pause and modify requests in real-time. Map Remote/Local features let you redirect requests. Throttle feature simulates slow connections. $50 license, 30-day free trial. Available on macOS, Windows, Linux.
# Charles Proxy setup for proxy debugging:
# 1. Set Charles as system proxy (Proxy -> macOS/Windows Proxy)
# 2. Install Charles CA certificate (Help -> SSL Proxying -> Install Certificate)
# 3. Enable SSL Proxying (Proxy -> SSL Proxying Settings -> Add *.*)
# 4. Set External Proxy (Proxy -> External Proxy Settings)
#    -> Point to your actual proxy: proxy.coronium.io:8080
# 5. Your traffic flows: Browser -> Charles -> Your Proxy -> Target
#    Charles shows decrypted traffic at each stage

# Key features for proxy debugging:
# - Sequence view: chronological request list
# - Structure view: grouped by domain
# - Breakpoints: pause on specific URLs to inspect/modify
# - Repeat: replay requests with different parameters
Debugging Workflow
Step 1: Test proxy connectivity with curl -x proxy:port https://httpbin.org/ip -v. Step 2: If proxy works, test the target directly (without proxy) to isolate the issue. Step 3: If both work independently, use mitmproxy between your client and the proxy to inspect the actual request/response. Step 4: Check TLS fingerprint at tls.peet.ws if getting unexplained 403s.
Error Handling Best Practices
Production-grade proxy error handling requires more than simple retries. These four patterns -- exponential backoff, IP rotation, health checking, and circuit breakers -- form a complete error management system.
Exponential Backoff with Jitter
On 429 or 503 errors, wait exponentially longer between retries: 2s, 4s, 8s, 16s. Add random jitter (0-1s) to prevent thundering herd when multiple scrapers retry simultaneously. Always check the Retry-After header first -- it gives the server-recommended wait time.
import random
import time

from requests.exceptions import ConnectionError, Timeout

class MaxRetriesExceeded(Exception):
    pass

def retry_with_backoff(func, max_retries=5):
    for attempt in range(max_retries):
        try:
            result = func()
            if result.status_code in [429, 503]:
                retry_after = result.headers.get("Retry-After")
                wait = int(retry_after) if retry_after else (2 ** attempt) + random.uniform(0, 1)
                time.sleep(wait)
                continue
            return result
        except (ConnectionError, Timeout):
            time.sleep((2 ** attempt) + random.uniform(0, 1))
    raise MaxRetriesExceeded()
Rotate IP on 403/429
When a proxy IP receives 403 (banned) or 429 (rate limited), immediately rotate to a different IP. Mark the banned IP with a cooldown period (5-30 minutes for 429, 1-24 hours for 403). Maintain a health score per proxy IP: decrement on errors, increment on successes, remove from pool below threshold.
import random
import time

class ProxyPool:
    def __init__(self, proxies):
        self.proxies = {p: {"score": 100, "cooldown_until": 0} for p in proxies}

    def get_proxy(self):
        now = time.time()
        available = [p for p, s in self.proxies.items()
                     if s["score"] > 20 and s["cooldown_until"] < now]
        return random.choice(available) if available else None

    def report_error(self, proxy, status_code):
        if status_code == 403:
            self.proxies[proxy]["score"] -= 50
            self.proxies[proxy]["cooldown_until"] = time.time() + 3600
        elif status_code == 429:
            self.proxies[proxy]["score"] -= 20
            self.proxies[proxy]["cooldown_until"] = time.time() + 300

    def report_success(self, proxy):
        self.proxies[proxy]["score"] = min(100, self.proxies[proxy]["score"] + 5)
Health Check Before Use
Before sending production traffic through a proxy, verify it is alive with a lightweight health check to a known endpoint (httpbin.org/ip or ip-api.com). Check that the response IP matches the expected proxy IP. Health checks catch expired proxies, wrong credentials, and network issues before they waste time on real requests.
import requests

def health_check(proxy, timeout=5):
    try:
        resp = requests.get("https://httpbin.org/ip",
                            proxies={"https": proxy}, timeout=timeout)
        if resp.status_code == 200:
            ip = resp.json().get("origin")
            return {"healthy": True, "ip": ip,
                    "latency_ms": resp.elapsed.total_seconds() * 1000}
        return {"healthy": False, "error": f"HTTP {resp.status_code}"}
    except Exception as e:
        return {"healthy": False, "error": str(e)}

# Run health checks on all proxies before a scraping session
healthy_proxies = []
for proxy in all_proxies:
    result = health_check(proxy)
    if result["healthy"] and result["latency_ms"] < 5000:
        healthy_proxies.append(proxy)
print(f"{len(healthy_proxies)}/{len(all_proxies)} proxies healthy")
Circuit Breaker Pattern
Implement a circuit breaker that "opens" (stops sending requests) when a proxy or target endpoint fails repeatedly. After a cooldown period, allow one test request ("half-open" state). If it succeeds, close the circuit (resume normal traffic). If it fails, keep the circuit open. This prevents wasting requests and bandwidth on known-failing proxies or targets.
import time

class CircuitBreaker:
    CLOSED, OPEN, HALF_OPEN = "closed", "open", "half_open"

    def __init__(self, failure_threshold=5, recovery_timeout=60):
        self.state = self.CLOSED
        self.failures = 0
        self.threshold = failure_threshold
        self.recovery_timeout = recovery_timeout
        self.last_failure_time = 0

    def can_execute(self):
        if self.state == self.CLOSED:
            return True
        if self.state == self.OPEN:
            if time.time() - self.last_failure_time > self.recovery_timeout:
                self.state = self.HALF_OPEN
                return True  # Allow one test request
            return False
        return True  # HALF_OPEN: allow the test request

    def record_success(self):
        self.failures = 0
        self.state = self.CLOSED

    def record_failure(self):
        self.failures += 1
        self.last_failure_time = time.time()
        if self.failures >= self.threshold:
            self.state = self.OPEN
Error Frequency by Proxy Type
Error rates vary dramatically by proxy type. Datacenter proxies fail 40-60% of the time on protected sites, while mobile proxies fail only 5-10%. The error distribution also shifts: datacenter proxies get mostly 403 bans, while mobile proxies get mostly transient timeouts.
Datacenter Proxies
40-60% failure rate
ASN instantly reveals datacenter origin
IPs shared across many users
TLS fingerprint detection if using HTTP libraries, not browsers
Target tarpitting suspected bots
Target blocking datacenter IP ranges entirely
Residential Proxies
15-30% failure rate
Shared pool, IPs may have reputation from other users
Some IPs flagged from overuse
Residential connections can be slow/unstable
Residential gateways recycling connections
JS challenge on suspicious patterns
Mobile Proxies (4G/5G)
5-10% failure rate
Only at very high request rates per IP
Carrier network latency spikes
Carrier network switching (4G->5G handoff)
CGNAT trust makes IP bans rare
Carrier DNS generally reliable
Mobile Proxies: Lowest Error Rates
Mobile proxies achieve 5-10% overall failure rates because CGNAT (RFC 6598) makes mobile IPs inherently trusted. Anti-bot systems cannot aggressively block mobile IP ranges without also blocking real mobile users. The result: fewer 403 bans, higher rate limit thresholds, and errors that are mostly transient (timeouts, resets) rather than permanent (IP bans).
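The transient-vs-permanent distinction above can be encoded as a simple retry policy. A hedged sketch: the status codes and cooldown lengths mirror the ProxyPool example earlier in this reference, while the `classify` helper and its action labels are illustrative, not a standard API.

```python
# Illustrative mapping of proxy errors to handling strategies.
# 403 is treated as permanent (ban); 429 and 5xx gateway errors as
# transient. Cooldown lengths follow the ranges discussed above.

TRANSIENT = {429, 502, 503, 504}
PERMANENT = {403}

def classify(status_code):
    """Return (action, cooldown_seconds) for a failed request."""
    if status_code in PERMANENT:
        return ("rotate_and_ban", 3600)   # 1h+ cooldown for banned IPs
    if status_code in TRANSIENT:
        return ("rotate_and_retry", 300)  # 5-min cooldown, retry elsewhere
    return ("retry_same_ip", 0)           # e.g. timeouts: often transient

print(classify(403))  # permanent: long cooldown
print(classify(429))  # transient: short cooldown, rotate
```

On datacenter pools most failures land in the `rotate_and_ban` branch; on mobile pools most land in `retry_same_ip`, which is why mobile pools shrink far more slowly under load.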
Frequently Asked Questions
Technical answers to common proxy error questions, covering HTTP vs SOCKS auth, JA3/JA4 fingerprinting, DNS leaks, connection pool management, debugging techniques, and error handling patterns.
Mobile Proxy Plans
Dedicated 4G/5G mobile proxies with 5-10% error rates on sites where datacenter proxies fail 40-60%. CGNAT trust mechanics reduce 403 bans and 429 rate limits. HTTP and SOCKS5 support included.
Puppeteer Proxy Configuration Guide
Step-by-step guide to configuring Puppeteer with rotating proxies, including proxy auth and stealth plugins.
Web Parsing with 4G Mobile Proxies
Technical guide to CGNAT trust, anti-bot bypass, Scrapy/Playwright config, and rate limiting data.
Complete Web Scraping Guide
Comprehensive scraping guide covering frameworks, proxy rotation, and scaling infrastructure.
Reduce Proxy Errors with Mobile IPs
Mobile proxies achieve 5-10% failure rates compared to 40-60% for datacenter proxies. CGNAT trust mechanics (RFC 6598) mean fewer 403 bans, higher 429 thresholds, and mostly transient errors instead of permanent IP blocks.
Compatible with curl, Python requests, Scrapy, Playwright, Puppeteer, and all HTTP/SOCKS5 clients. Unlimited bandwidth with no per-GB billing.