How to Migrate from IP2Location to IP Geo API in 2026: A Step-by-Step Drop-In Guide

7-minute read · 2026 code samples · honest rollback plan

This is the practical companion to the IP2Location alternative comparison → and the head-on review of IP2Location vs IP Geo API →. Those two pages tell you whether to switch. This page tells you how — including the three packaging and field-shape gotchas no other migration guide is honest about.

TL;DR — most IP2Location → IP Geo API migrations land in half an engineering day for REST callers, and roughly one full day for teams currently consuming the BIN/CSV/MMDB downloadable database files. The real work is not the swap itself; it is decommissioning the monthly DB-sync cycle, unbundling the IP2Proxy SKU (proxy / VPN / Tor flags move from a separately-licensed product to inline fields), and converting an annual USD prepay invoice to a monthly EUR subscription with iDEAL / SEPA / Bancontact at checkout.

Who this guide is for

You currently use IP2Location in one of three shapes:

…and you’ve decided that the BIN-sync cycle, the dual-product IP2Proxy license, USD-only annual prepay invoices, and the non-EU vendor HQ (Penang, Malaysia) cost more than they should. You want a REST replacement that:

If any of those five boxes is unchecked — pause and read the vs comparison → first. The tradeoffs are real, especially if you actively need offline / air-gapped lookups for on-prem deployments, sub-millisecond local-process latency at multi-million-lookup-per-second scale, or PX11-tier fields like weather_station_code, mcc / mnc, iab_category, or elevation, none of which we expose.

The 7-step migration checklist

  1. Inventory every call site that hits ip2location.io, api.ip2proxy.io, OR loads a .bin / .csv / .mmdb IP2Location file.
  2. Map your fields to the IP2Location-compatibility response (?format=ip2location).
  3. Add a feature flag so you can switch any call site between providers.
  4. Wire a 60-second cache in front of the API client (in-memory or Redis).
  5. Deploy in shadow mode — call both, log differences, serve IP2Location responses.
  6. Cut over gradually — 10% → 50% → 100% of traffic over 48 hours.
  7. Decommission — cancel both DB-license and IP2Proxy, archive USD annual invoice, drop the BIN-sync cron.

The rest of this post walks each step with copy-paste code.

Step 1 — Inventory call sites

Run this in the repo root before touching anything:

git grep -nE "ip2location|ip2proxy|IP2Location|\\.BIN|\\.bin|IP2LOCATION_KEY" -- ':!*.lock' ':!*.md'

Most teams find 1-6 call sites: one for the main REST / lookup, optionally one for IP2Proxy REST, plus any number of BIN-file loaders (IP2Location.IP2Location("/var/lib/IP2LOCATION-LITE-DB11.BIN"), new IP2Location.IP2Location_init("/data/IP2PROXY-PX10.BIN")). The BIN-loader paths are the higher-leverage swap target — those are the call sites that today require a monthly DB pull, multi-GB redeploy, and version-skew checks across every app server. Audit each one; teams typically find one monthly cron job that nobody owns refreshing the file.

Watch-out: the BIN loaders are usually process-local singletons instantiated at app startup. Hot-reload semantics differ across the official libraries (pyIP2Location re-mmaps lazily; ip2location-nodejs requires a process restart on file replace; the Go module supports Open + Close). The new HTTP client is per-request stateless, so the migration eliminates this entire class of “is the lookup table fresh on every pod after deploy?” questions.

Step 2 — Map the fields

IP2Location’s REST API returns a flat JSON shape with sub-tier fields gated by your DB package:

{
  "ip": "8.8.8.8",
  "country_code": "US",
  "country_name": "United States of America",
  "region_name": "California",
  "city_name": "Mountain View",
  "latitude": 37.405992,
  "longitude": -122.078515,
  "zip_code": "94043",
  "time_zone": "-07:00",
  "asn": "15169",
  "as": "Google LLC",
  "is_proxy": false
}

…and the BIN/CSV local-lookup shape is similar but with package-specific column names. The DB package number determines which fields are populated: DB1 = country only; DB11 = country + region + city + lat/lng + ZIP + timezone; DB24 = +usage type / category; DB26 = +ASN/AS. PX packages (PX1-PX11) gate the proxy block.

IP Geo API ships an ?format=ip2location compatibility shim that returns the same flat shape so most call sites stop noticing the swap. The mapping for the fields ~95% of integrations rely on:

| Your old code | IP2Location REST / BIN field | IP Geo API ?format=ip2location | Native ?format=ipgeo |
| --- | --- | --- | --- |
| IP | ip | ip | ip |
| Country code (ISO-2) | country_code (REST) / country_short (BIN) | country_code | country.iso_code |
| Country name | country_name (REST) / country_long (BIN) | country_name | country.name |
| Region (state/province) | region_name | region_name | region.name |
| City | city_name | city_name | location.city |
| Postal | zip_code | zip_code | location.postal_code |
| Lat | latitude (number) | latitude (number) | location.lat |
| Lng | longitude (number) | longitude (number) | location.lng |
| Time zone | time_zone (e.g. "-07:00") | time_zone (offset string) | location.timezone (IANA name) |
| ASN | asn (string of digits) | asn (string of digits) | network.asn (integer) |
| AS / Org name | as | as | network.organization |
| Proxy / VPN / Tor (PX-tier) | is_proxy (boolean) + proxy_type ("VPN"/"TOR"/"DCH") | is_proxy + is_vpn + is_tor + is_datacenter (split, free, inline) | threat.is_proxy etc. |
| Usage type (DB24+) | usage_type ("COM" / "ISP" / etc.) | usage_type (string) | network.usage_type |

Fields the shim does not cover (documented gaps):

  - domain (DB13+ — reverse DNS lookup at lookup time; we expose this via a separate path on the Business plan)
  - iab_category (PX11 — IAB content categories; specialty ad-tech field, low signal for most use cases)
  - mcc / mnc / mobile_brand (PX10 — mobile carrier codes; consider a dedicated mobile-network vendor if load-bearing)
  - weather_station_code / weather_station_name (PX9 — nearest-weather-station; we do not expose this)
  - elevation (PX8 — geographic elevation in metres; trivially derivable from lat/lng via a free elevation API)
  - address_type ("U" unicast / "A" anycast — niche)
  - the entire district / category / time_zone_name (IANA-string variant, only on PX11) blocks

If your code reads any of those, list them as blockers and decide per call site whether to drop the dependency or keep IP2Location for that path only (hybrid pattern — see the comparison page →).
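If you want a mechanical answer to "does our code actually read any of these gap fields?", one option is to scan real legacy records for populated gap fields. A minimal sketch — the SHIM_GAPS set and blocker_fields helper are illustrative names, not part of any SDK:

```python
# Illustrative helper, not from any SDK: scans a legacy IP2Location
# record for populated fields the compatibility shim omits.
SHIM_GAPS = {
    "domain", "iab_category", "mcc", "mnc", "mobile_brand",
    "weather_station_code", "weather_station_name", "elevation",
    "address_type", "district", "category", "time_zone_name",
}

def blocker_fields(legacy_record: dict) -> set:
    # IP2Location uses "-" (or empty) for unlicensed / absent fields,
    # so those don't count as real dependencies
    return {
        k for k, v in legacy_record.items()
        if k in SHIM_GAPS and v not in (None, "", "-")
    }
```

Run it over a day of logged legacy responses; an empty set across the board means the shim covers your call sites.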

Step 3 — Feature flag, then drop-in client

Python (was IP2Location BIN-file singleton)

# before — BIN file loaded at module import, requires monthly file refresh
import IP2Location

DB = IP2Location.IP2Location("/var/lib/IP2LOCATION-DB11.BIN")  # multi-GB file

def lookup_country(ip: str) -> str:
    rec = DB.get_all(ip)
    return rec.country_short

# after — drop-in via the ip2location-compatibility shim, no file
import os, requests
from functools import lru_cache

API_KEY = os.environ["IPGEO_API_KEY"]
USE_IPGEO = os.environ.get("USE_IPGEO_API", "0") == "1"   # feature flag

@lru_cache(maxsize=10_000)   # NB: lru_cache has no TTL — see Step 4 for a TTL cache
def _lookup(ip: str) -> dict:
    r = requests.get(
        f"https://api.ipgeo.10b.app/v1/{ip}",
        headers={"Authorization": f"Bearer {API_KEY}"},
        params={"format": "ip2location"},
        timeout=2.0,
    )
    r.raise_for_status()
    return r.json()

def lookup_country(ip: str) -> str:
    if USE_IPGEO:
        return _lookup(ip)["country_code"]      # flat shape — no rewrite
    rec = DB.get_all(ip)
    return rec.country_short

Two structural deltas in this migration: (a) the BIN file is gone — no more multi-GB artefact in /var/lib, no more monthly refresh cron, no more IP2Location_BIN_PATH env var, no more du -sh discipline; (b) the API key now lives in an Authorization: Bearer … header that does not appear in URL logs or browser history (IP2Location REST also used ?key=… query-string auth; if you were on the REST product before, treat the existing key as already leaked across nginx, Cloudflare, APM, and Sentry logs).

Node / TypeScript (was ip2location-nodejs BIN loader)

// before
import IP2Location from "ip2location-nodejs";
const db = new IP2Location();
db.open("/var/lib/IP2LOCATION-DB11.BIN");

const rec = db.getAll(ip);
const country = rec.country_short;

// after — drop-in
const cache = new Map<string, any>();
export async function geoLookup(ip: string) {
  if (process.env.USE_IPGEO_API !== "1") {
    return db.getAll(ip);   // legacy BIN path
  }
  if (cache.has(ip)) return cache.get(ip);
  const r = await fetch(
    `https://api.ipgeo.10b.app/v1/${ip}?format=ip2location`,
    { headers: { Authorization: `Bearer ${process.env.IPGEO_API_KEY!}` } }
  );
  if (!r.ok) throw new Error(`ipgeo ${r.status}`);
  const j = await r.json();
  cache.set(ip, j);
  setTimeout(() => cache.delete(ip), 60_000);   // 60-s TTL
  return j;
}

Go

// after — drop-in via the ip2location-compatibility shim
url := fmt.Sprintf("https://api.ipgeo.10b.app/v1/%s?format=ip2location", ip)
req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
if err != nil {
    return nil, err
}
req.Header.Set("Authorization", "Bearer "+os.Getenv("IPGEO_API_KEY"))
resp, err := httpClient.Do(req)
if err != nil {
    return nil, err
}
defer resp.Body.Close()
// ... decode resp.Body into your existing IP2Location-shaped struct

Step 4 — Cache layer (the step everyone skips)

A naive 1-call-per-request integration will burn through IP Geo API’s free 1K-req/day cap in the first hour of any production traffic. The Starter tier (€29/mo for 100K req/mo) is fine for most apps, but a 60-second cache typically deflects 70-90% of calls at zero cost — and matters far less for ex-IP2Location callers who used to pay an annual prepay irrespective of volume.

If you want strict cache-miss bounds, add a per-host concurrency limiter so only one in-flight call per IP is ever issued. Bonus: a single cached response on the new client covers what previously required two DB lookups (DB11 + PX10 for proxy data) on IP2Location, which roughly halves your effective lookup volume on the threat-detection path. Note the structural difference: BIN-file callers used to pay zero per-lookup at runtime (process-local DB), so cache-hit-rate is less of a cost lever and more a latency lever — a cached HTTP response is ~0 ms vs ~5-15 ms over the wire even on warm TCP, which can matter on hot paths.
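The 60-second cache plus per-IP single-flight limiter fits in ~25 lines of stdlib Python. A sketch under our own naming — TTLCache is not a published client class, and a Redis-backed variant would replace the in-process dict:

```python
import threading
import time

class TTLCache:
    """In-process 60-second TTL cache with per-IP single-flight locking."""

    def __init__(self, ttl: float = 60.0):
        self.ttl = ttl
        self._data = {}                    # ip -> (expires_at, value)
        self._locks = {}                   # ip -> Lock: one in-flight fetch per IP
        self._guard = threading.Lock()     # protects the lock registry

    def get_or_fetch(self, ip: str, fetch):
        hit = self._data.get(ip)
        if hit and hit[0] > time.monotonic():
            return hit[1]                  # fresh — no network call
        with self._guard:
            lock = self._locks.setdefault(ip, threading.Lock())
        with lock:                         # only one thread fetches this IP
            hit = self._data.get(ip)
            if hit and hit[0] > time.monotonic():
                return hit[1]              # another thread fetched while we waited
            value = fetch(ip)              # e.g. the HTTP _lookup from Step 3
            self._data[ip] = (time.monotonic() + self.ttl, value)
            return value
```

The double-checked read inside the lock is what gives you the strict cache-miss bound: N concurrent requests for one IP produce exactly one upstream call.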

Step 5 — Shadow mode (the step that builds trust)

Before flipping any user-facing path: call both APIs and compare.

def lookup_country(ip: str) -> str:
    legacy = DB.get_all(ip).country_short
    if SHADOW_MODE:
        try:
            new = _lookup(ip)["country_code"]
            if new != legacy:
                logger.warning("ip2location-shadow-mismatch",
                               extra={"ip": ip, "legacy": legacy, "new": new})
        except Exception as e:
            logger.error("ip2location-shadow-error",
                         extra={"ip": ip, "error": str(e)})
    return legacy

Run shadow mode for 24-48 hours. The mismatch rate on country-level data is typically <0.5% (mostly stale BIN snapshots vs daily-refreshed managed data — IP2Location BIN files are released monthly, so a freshly-deployed BIN file matches our daily-refreshed data closely on day one and drifts as the month progresses). City-level is 1-3%. ASN naming is the noisiest signal — both providers ship the same numeric ASN, but the as (IP2Location) and network.organization (IP Geo API native) fields can differ in casing or punctuation ("Google LLC" vs "GOOGLE"). The as field on the shim re-formats to match IP2Location’s casing convention.

The single biggest mismatch class for IP2Location is the proxy / VPN / Tor flag block: the legacy BIN/REST path returns a binary is_proxy plus a proxy_type enum ("VPN" / "TOR" / "DCH" / "PUB" / "WEB" / "SES" / "RES"), and the data is only populated if you have a PX-tier license — otherwise the field is absent or "-". IP Geo API returns four boolean flags inline (is_proxy / is_vpn / is_tor / is_datacenter) on every plan including free. Treat absent-vs-populated as a known-good signal, not a mismatch. For most fraud / analytics rules the binary is_proxy is the only field that matters; pin your match logic to that.

Step 6 — Gradual cutover

Once shadow logs are clean, flip a percentage of traffic via your feature-flag system (LaunchDarkly, Unleash, or a hashed-IP rollout):

import hashlib

def use_ipgeo(ip: str, percent: int) -> bool:
    h = int(hashlib.md5(ip.encode()).hexdigest(), 16)   # md5 for bucketing only, not security
    return (h % 100) < percent

Recommended ladder: 10% → 50% → 100% over 48 hours. Watch your existing fraud-flag dashboards for unexpected spikes; the bundled threat-flag block exposes signals that an IP2Location DB11 license (without the PX-tier proxy package) did not, so if you wire is_vpn=true into a soft-block rule you may see a 5-15% bump in flagged sessions. This is not a regression — it is the threat data you were paying for separately on the IP2Proxy product line, now bundled inline.

Step 7 — Decommission

Once 100% has been on IP Geo API for >7 days with no incidents:

  1. Cancel both DB licenses in the IP2Location account portal — your geolocation DB (DB1-DB26) and IP2Proxy (PX1-PX11) if you had it. The two product lines bill separately on annual invoices; cancelling DB does not cancel IP2Proxy. Annual prepay is non-refundable for the remainder of the term, so plan the cutover ideally a month before your renewal date. Most teams forget this and end up paying for one more year while running on IP Geo API in parallel.
  2. Drop the BIN-sync cron job. This is usually a wget + checksum + mv + service-reload script in /etc/cron.monthly, plus any matching Ansible / Terraform / Kubernetes ConfigMap that ships the file. Check /var/log/cron, crontab -l, your Ansible roles, and your CI pipelines for any reference to IP2LOCATION- or IP2PROXY- filenames.
  3. Delete the BIN files from /var/lib, your container images, and your S3 / object-storage backups. A typical DB11+PX10 footprint is 1-3 GB per file; container-image bloat is a real win on rebuild time and registry storage cost.
  4. Remove the IP2LOCATION_KEY / IP2LOCATION_BIN_PATH env vars from CI / production / staging.
  5. Cancel the IP2Location Stripe USD recurring invoice (if you were on REST) — most teams forget the duplicate-invoice line until accounting flags it next quarter.
  6. Delete the legacy fallback branch from your code (keep the feature-flag scaffold for the next migration).
  7. Update your DPIA / Article 30 record — processor change from IP2Location (Penang, Malaysia) to corem6 BV (NL/EU). The Article 44/45 transfer-impact-assessment for non-adequacy-country processing of IP visitor data is removed from your record.

The 7 gotchas teams hit in week one

  1. Annual prepay non-refundable. IP2Location’s annual licensing model means cutting over mid-year leaves money on the table. Plan the migration to land 1-2 months before renewal so the account expires naturally rather than overlapping with your new monthly EUR subscription. If you’re already mid-term, the math may favor running both in parallel until renewal.
  2. Two product lines on the invoice, not one. If you used both DB-tier (geolocation) and PX-tier (IP2Proxy), they bill on separate annual invoices. Cancel both, or you’ll renew IP2Proxy automatically a year after cancelling the geolocation DB.
  3. asn string-of-digits vs integer. IP2Location returns "15169" (string of digits, no AS prefix). IP Geo API native returns 15169 (integer); the shim preserves the string format on the asn field but exposes the integer at network.asn. Code that does int(asn) on the legacy field continues to work; code that reads network.asn as a string will break. Pin a unit test on the type before flipping.
  4. time_zone offset-string vs IANA-name. IP2Location returns "-07:00" (raw UTC offset). IP Geo API native returns "America/Los_Angeles" (IANA timezone name); the shim preserves the offset string at time_zone but exposes the IANA name at location.timezone. The IANA name is strictly better — it survives DST transitions and is the canonical input for Intl.DateTimeFormat / pytz / zoneinfo. But code that does string comparisons on "-07:00" will break if you mix paths.
  5. proxy_type enum vs split booleans. IP2Location returns proxy_type: "VPN" (a string-enum that can be "VPN" / "TOR" / "DCH" / "PUB" / "WEB" / "SES" / "RES"). IP Geo API splits these into four booleans (is_vpn / is_tor / is_datacenter / is_proxy). The shim derives the legacy enum string from the booleans (highest-priority match wins), but if you have a switch statement on every enum value, the "PUB" / "WEB" / "SES" / "RES" granular sub-types collapse into the residual is_proxy=true with is_vpn=is_tor=is_datacenter=false. Most fraud rules don’t use the sub-types, but check yours.
  6. No cache layer. Quota burn in 4-6 hours on the free tier (1K/day cap). Add the cache before flipping the flag — especially relevant for ex-BIN-file callers used to “free at runtime”.
  7. Outbound HTTPS blocked. Production VPC egress rules deny api.ipgeo.10b.app. Get firewall change scheduled before cutover. IP2Location’s hostname (api.ip2location.io / api.ip2proxy.io) was likely already allowlisted; the new hostname is not. Same applies to your CSP if you call from the browser.
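Gotchas 3-5 are cheap to pin with one assertion helper run against a shim response in CI before the flag flips. A sketch — assert_shim_shapes is our name, not a shipped test utility:

```python
def assert_shim_shapes(resp: dict) -> None:
    """Fail fast if the compatibility shim's field shapes ever drift."""
    # gotcha 3: asn stays a string of digits on the shim — no "AS" prefix, no int
    assert isinstance(resp["asn"], str) and resp["asn"].isdigit()
    # gotcha 4: time_zone stays an offset string ("-07:00"), not an IANA name
    tz = resp["time_zone"]
    assert tz[0] in "+-" and ":" in tz
    # gotcha 5: the binary proxy flag is a real boolean, never "-"
    assert isinstance(resp["is_proxy"], bool)
```

Feed it a live shim response in a smoke test; a type drift then breaks CI instead of a fraud rule.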

What you’ll see in week two

Pairing pages

FAQ

How long does a real IP2Location migration take? For a single-stack web app calling the REST API with 1-4 call sites and a working CI: half an engineering day end-to-end. Multi-stack monorepos with BIN-file loaders in 5+ services: 1-2 days, mostly in service-by-service swap-out + cache-layer wiring + cron decommission. The annual-prepay-renewal-window planning is the time sink most teams underestimate, not the field-shape diff — put it on the cutover checklist 60 days ahead.

Will my IP2Location-shaped tests still pass? Yes — the compatibility shim returns the same flat JSON shape for the supported field set, including the country_code / region_name / city_name / time_zone / asn / as core fields that 95% of integrations rely on. For fields outside the shim (domain, iab_category, mcc, mnc, weather_station_code, elevation, address_type, time_zone_name IANA-string variant on PX11), mock the new client path or move that logic to a dedicated reference-data source.

What about the BIN/CSV/MMDB files I’m running locally? Replace the local-lookup call with an HTTP GET to IP Geo API. Cache hot IPs in Redis or equivalent for p95 latency. The migration is conceptually simpler than the REST-to-REST swap because the BIN files have a more constrained API surface — the official libraries all expose get_all(ip) or equivalent, and you can wrap the new HTTP client in a function with the same signature. The trade-off: you lose process-local sub-millisecond latency and gain ~5-15 ms of network latency per uncached lookup. If that delta breaks an SLA, keep IP2Location for that specific path and migrate everything else (hybrid pattern). For >99% of web apps, dashboards, and SaaS backends, the latency delta is invisible compared to your existing per-request budget.
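The same-signature wrapper mentioned above can look like this. A sketch — HttpGeoDB and the attribute subset are illustrative; extend it with whichever BIN columns your call sites actually read:

```python
from types import SimpleNamespace

class HttpGeoDB:
    """Drop-in stand-in for a BIN loader: exposes get_all(ip) but fetches
    over HTTP. `lookup` is any callable returning the flat
    ?format=ip2location dict (e.g. the cached _lookup from Step 3)."""

    def __init__(self, lookup):
        self._lookup = lookup

    def get_all(self, ip: str):
        j = self._lookup(ip)
        # re-expose BIN-style attribute names on top of the flat REST shape
        return SimpleNamespace(
            country_short=j["country_code"],
            country_long=j["country_name"],
            region=j["region_name"],
            city=j["city_name"],
            zipcode=j["zip_code"],
        )
```

Swap the module-level BIN singleton for an HttpGeoDB instance and every rec.country_short call site compiles — and runs — unchanged.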

What about IP2Proxy specifically? IP2Proxy is consolidated into the bundled threat block (is_proxy, is_vpn, is_tor, is_datacenter) on every IP Geo API response, including free tier. If you had a PX-tier license, the migration removes one annual invoice and one DB-sync cycle. The proxy_type string-enum ("VPN" / "TOR" / "DCH" / "PUB" / "WEB" / "SES" / "RES") is approximated by the four booleans; the granular "PUB" / "WEB" / "SES" / "RES" sub-types collapse into the residual is_proxy=true case. If you use those sub-types in fraud-rule branching, audit those rules before flipping.

What’s the rollback story if something goes wrong? The feature flag gives you a 1-second flip back to IP2Location. Keep the IP2Location integration working for at least 30 days post-cutover; if you’re on annual prepay you’ve already paid for the remainder of the term, so leaving it as instant-fallback insurance costs you nothing. The IP2Proxy SKU you can leave running too for the same reason — sunk cost, may as well use it as belt-and-suspenders.

Can I migrate one service at a time? Yes — and it’s the recommended approach. Each call site is independent. Migrate the lowest-risk one first (often a dashboard analytics path or a server-side log enrichment job), measure for a week, then move to the next. There is no all-or-nothing requirement.

Do you support a /bulk endpoint like IP2Location’s BIN local-lookup? IP2Location does not ship a REST /bulk endpoint — bulk users typically use the local DB file. We support a JSON POST to /v1/bulk with up to 100 IPs per call (paginate for larger batches). The response is a flat array; the per-IP response shape is identical to the single-lookup ?format=ip2location response. This is one of the biggest workflow improvements for ex-BIN-file batch consumers — no DB sync, no process restart, no version skew, just an HTTP POST.
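A batching sketch for that /v1/bulk path, stdlib-only so it is dependency-free — note the {"ips": [...]} request-body key is our assumption; check the endpoint docs before relying on it:

```python
import json
from urllib import request

def chunked(seq, size=100):
    """Split a list into batches no larger than the 100-IP bulk cap."""
    return [seq[i:i + size] for i in range(0, len(seq), size)]

def bulk_lookup(ips, api_key):
    out = []
    for batch in chunked(ips):
        req = request.Request(
            "https://api.ipgeo.10b.app/v1/bulk",
            data=json.dumps({"ips": batch}).encode(),   # body key is assumed
            headers={
                "Authorization": f"Bearer {api_key}",
                "Content-Type": "application/json",
            },
            method="POST",
        )
        with request.urlopen(req, timeout=10) as resp:
            out.extend(json.load(resp))                 # flat array per the FAQ
    return out
```

A 250-IP list becomes three POSTs of 100 / 100 / 50; the flat-array responses concatenate in order, so index i in the output still corresponds to index i in the input.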

What if I was on the IP2Location LITE free tier? Then the migration math shifts away from cost (both LITE and our free 1K-req/day are free) toward feature gain: LITE comes with a CC-BY-SA attribution requirement and ships only the country-level DB on a monthly refresh; our free tier ships full city + ASN + threat-block on a daily refresh with no attribution required. Side-project teams that “just need geo + light bot detection” usually find the migration is a net feature gain at zero cost change.

Why does IP2Location split DB and PX product lines at all? Historically the proxy/VPN data sources were licensed and updated separately, and IP2Location passed through that packaging. Our pricing posture is “threat is a baseline expectation in 2026, not an upsell” — we vertically integrated the threat data into one quota, one invoice, one response. That difference in posture is the single biggest reason teams hit this migration guide.

Related migration & comparison reading

Industry deep-dives


Last reviewed 2026-05-10 · IP Geo API team · Comments / corrections: hello@ipgeo.10b.app

Pairs with the full IP2Location alternative comparison page and the head-on IP Geo API vs IP2Location review.


Get early access — 50% off for 12 months

First 100 signups lock in 50% off any paid plan for the first year. No credit card required — we’ll email you at launch.