
Residential Proxies for TCG Sourcing: Why I Built My Own Stack
April 20, 2025
If you're running an inventory engine that pulls live market prices from TCGPlayer for hundreds of SKUs every minute, you'll hit a wall fast. The wall is rate-limiting. The wall after that is IP banning. The wall after that is your scraper returning 403s with no path forward.
The fix is residential proxies — IPs that look like normal home internet connections, rotated frequently, sourced from real consumer ISPs. Datacenter proxies don't cut it because the major retail platforms can identify those IP ranges and treat them as bots by default.
What I built
evomi-proxy-tool is a self-hosted dashboard that sits on top of Evomi's residential proxy network. It does four things:
- Generates rotating proxy URLs for use in scrapers. Pass in a target country/region and a session duration, get back a proxy string ready to drop into axios or puppeteer.
- Tests reachability before I trust a proxy with real workload. The dashboard hits a known endpoint through each proxy and reports latency + IP info.
- Manages IP whitelists. Evomi requires whitelisting your origin IP — when I deploy from a new build server, I need to update that list, and the dashboard automates it.
- Surfaces account/usage analytics. Bandwidth burned, requests run, error rate by region — the kind of stuff you need to know before your invoice surprises you.
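The URL-generation piece is the core of it. Here's a minimal sketch of what that looks like — note that the credential format below is hypothetical (providers, Evomi included, encode targeting options in their own username syntax, so the real format comes from their docs), and `proxy.example.com` is a placeholder host:

```typescript
// Sketch of rotating proxy URL generation. The username-parameter scheme and
// host below are assumptions for illustration, not Evomi's actual format.
interface ProxyOptions {
  country: string;        // ISO country code, e.g. "US"
  sessionMinutes: number; // how long the same exit IP should stick
}

function buildProxyUrl(user: string, pass: string, opts: ProxyOptions): string {
  // A fresh session id gets a fresh exit IP; reusing the id keeps it sticky.
  const sessionId = Math.random().toString(36).slice(2, 10);
  const username =
    `${user}-country-${opts.country}-session-${sessionId}-lifetime-${opts.sessionMinutes}`;
  return `http://${username}:${pass}@proxy.example.com:1000`;
}

// The returned string drops straight into an axios config:
//   axios.get(url, { proxy: false, httpsAgent: new HttpsProxyAgent(proxyUrl) })
```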
Why a custom dashboard
Most proxy providers ship their own UIs, but they're optimized for their full feature set, not for the specific workflows I run. I wanted three things their dashboard didn't give me:
- A one-click "regenerate this proxy URL with a new session" button
- Reachability testing that returns latency, response codes, and the apparent IP back to me in one row
- A view that mixes proxy state with my scraper's actual run history
So I wrapped Evomi's API in a Next.js app, added NextAuth for login (so I can share access with the team without sharing the master API key), and shipped it.
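As a rough sketch of what one of those wrapped routes looks like — the route path, query parameter names, and the `proxyUrlFor` helper are my own stand-ins here, not Evomi's API:

```typescript
// Hypothetical app-router handler: app/api/proxy-url/route.ts
// In the real app this sits behind a NextAuth session check, so teammates
// get proxy URLs without ever seeing the master API key.

function proxyUrlFor(country: string): string {
  // Stand-in: the real helper calls the provider's API (or builds a
  // credentialed URL) using the server-side master key.
  return `http://user-country-${country}:pass@proxy.example.com:1000`;
}

export async function GET(req: Request): Promise<Response> {
  const country = new URL(req.url).searchParams.get("country") ?? "US";
  return Response.json({ proxyUrl: proxyUrlFor(country) });
}
```

Keeping the key server-side in a route handler is the whole point: the browser only ever sees short-lived proxy strings.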
The architecture
┌────────────────┐ ┌──────────────────┐ ┌──────────────┐
│ pricing engine │ → │ proxy URL gen │ → │ Evomi API │
│ (server-side) │ │ (Next.js route) │ │ (residential)│
└────────────────┘ └──────────────────┘ └──────────────┘
│
↓
┌────────────────┐
│ TCGPlayer mpapi│
└────────────────┘
The pricing engine asks the proxy URL endpoint for a fresh proxy, makes its TCGPlayer request through it, parses the response, and writes the result back to the inventory API. If the request fails, we mark the proxy bad and rotate. If it succeeds, we cache the result for 60 seconds and move on.
The unsexy lessons
- Sticky sessions matter. If you rotate IPs mid-session, some endpoints get suspicious. Evomi lets you bind a session for up to 30 minutes, which is the right amount of stickiness.
- Don't share proxies across workloads. I learned the hard way that running the inventory scraper and a competitive-monitoring crawler through the same IP burns it twice as fast.
- Track burn rate per scraper, not just total. When my bandwidth bill jumped 40% one month, the per-scraper view found a single bug in 10 minutes that would've taken me an hour to find otherwise.
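The sticky-session point is easy to get wrong in code, so here's a minimal sketch of the pattern: reuse one session id until its lifetime expires, then mint a new one. The id format is provider-specific; this is just the rotation logic:

```typescript
// Sketch: keep one session id alive for up to 30 minutes (the binding
// lifetime mentioned above), then rotate. The id syntax is illustrative.
const SESSION_TTL_MS = 30 * 60_000;
let session: { id: string; startedAt: number } | null = null;

function stickySessionId(now: number = Date.now()): string {
  if (!session || now - session.startedAt >= SESSION_TTL_MS) {
    session = { id: Math.random().toString(36).slice(2, 10), startedAt: now };
  }
  return session.id; // same id => same exit IP, within the TTL
}
```

Every request inside the window embeds the same id into the proxy username, so the target sees one consistent IP instead of a suspicious mid-session hop.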
This stack is what makes the rest of SantahsCards' tooling possible. Without reliable scraping, there's no live pricing. Without live pricing, the wholesale model doesn't work.