How to Migrate from DB-IP to IP Geo API in 2026: A Step-by-Step Drop-In Guide

7-minute read · 2026 code samples · honest rollback plan

This is the practical companion to the DB-IP alternative comparison → and the head-on review of IP Geo API vs DB-IP →. Those two pages tell you whether to switch. This page tells you how — including the three packaging and field-shape gotchas no other migration guide is honest about.

TL;DR — most DB-IP → IP Geo API migrations land in half an engineering day for REST callers, and roughly one full day for teams currently consuming the MMDB / CSV downloadable database files. The real work is not the swap itself; it is decommissioning the MMDB-sync cycle, scrubbing the CC-BY 4.0 attribution backlink off public-facing surfaces (a contractual obligation on the free tier most teams forget when they upgrade), unbundling the IP-to-Threat / Anonymous / Datacenter separately-licensed product lines into a single inline threat block, and converting USD monthly billing to EUR monthly with iDEAL / SEPA / Bancontact at checkout.

Who this guide is for

You currently use DB-IP in one of three shapes:

…and you’ve decided that the MMDB-sync cycle, the multi-product-line invoicing, USD-only billing, the CC-BY 4.0 attribution clause on the free tier, and the global-CDN-edge posture (DB-IP is Brussels-headquartered but the public REST API is fronted by globally distributed CDN edges) cost more than they should. You want a REST replacement that:

If those six boxes are unchecked — pause and read the vs comparison → first. The tradeoffs are real, especially if you actively need offline / air-gapped lookups for on-prem deployments, sub-millisecond local-process latency at multi-million-lookup-per-second scale, or Extended-tier fields like weatherStationCode, linkedSites, addressType, or the granular connectionType enum we don’t expose at the same depth.

The 7-step migration checklist

  1. Inventory every call site that hits api.db-ip.com, OR loads a .mmdb / .csv DB-IP file.
  2. Map your fields to the DB-IP-compatibility response (?format=db-ip).
  3. Add a feature flag so you can switch any call site between providers.
  4. Wire a 60-second cache in front of the API client (in-memory or Redis).
  5. Deploy in shadow mode — call both, log differences, serve DB-IP responses.
  6. Cut over gradually — 10% → 50% → 100% of traffic over 48 hours.
  7. Decommission — cancel the threat / anonymous / datacenter add-ons, scrub the attribution backlink, archive USD invoice, drop the MMDB-sync cron.

The rest of this post walks each step with copy-paste code.

Step 1 — Inventory call sites

Run this in the repo root before touching anything:

git grep -nE "db-ip\\.com|dbip|DB_IP|DBIP|\\.mmdb|maxminddb|DBIP_KEY|db_ip_key" -- ':!*.lock' ':!*.md'

Most teams find 1-6 call sites: one for the main REST /v2/{key}/{ip} lookup, optionally one or more for the IP-to-Threat / Anonymous / Datacenter companion REST endpoints, plus any number of MMDB-file loaders (maxminddb.open_database("/var/lib/dbip-city-lite-2026-05.mmdb"), Reader.Open("/data/dbip-asn-lite-2026-05.mmdb")). The MMDB-loader paths are the higher-leverage swap target — those are the call sites that today require a daily or monthly DB pull, multi-GB redeploy, and version-skew checks across every app server. Audit each one; teams typically find a nightly or monthly cron job, owned by nobody, that refreshes the file.

Watch-out: the MMDB loaders are usually process-local singletons instantiated at app startup. Hot-reload semantics differ across the official libraries (maxminddb-python re-mmaps on file replace if you call close() first; maxminddb-golang requires explicit Reader.Close() + maxminddb.Open() to pick up a new file; node-maxmind requires a process restart). The new HTTP client is per-request stateless, so the migration eliminates this entire class of “is the lookup table fresh on every pod after deploy?” questions.
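If you need to keep the MMDB path alive during the shadow window, the close-then-reopen discipline above is worth wrapping once instead of scattering across call sites. A minimal sketch, with the opener injected (maxminddb.open_database in production is an assumption about your loader; a fake works in tests), so nothing here depends on the library being installed:

```python
import os
import threading

class ReloadingReader:
    """Reopen an MMDB reader when the file on disk changes.

    open_fn is any callable returning an object with .get(ip) and
    .close() -- in production that would typically be
    maxminddb.open_database (an assumption, not verified here).
    """
    def __init__(self, path, open_fn):
        self._path = path
        self._open_fn = open_fn
        self._lock = threading.Lock()
        self._mtime = os.path.getmtime(path)
        self._reader = open_fn(path)

    def get(self, ip):
        mtime = os.path.getmtime(self._path)
        if mtime != self._mtime:
            with self._lock:
                if mtime != self._mtime:      # double-checked under the lock
                    self._reader.close()      # close first, then reopen
                    self._reader = self._open_fn(self._path)
                    self._mtime = mtime
        return self._reader.get(ip)
```

The mtime check runs on every lookup, which is cheap; the lock only matters on the rare reload, so the hot path stays contention-free.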

Watch-out #2: scan also for the CC-BY 4.0 attribution snippet on public-facing surfaces. The DB-IP free tier’s attribution clause requires a visible link back to db-ip.com on any page that displays geolocation data. Teams that started on Lite and never upgraded carry this snippet around for years; teams that did upgrade often forget to scrub it. Search your templates / React components / CMS for db-ip.com and IP geolocation by DB-IP strings before cutover.

Step 2 — Map the fields

DB-IP’s REST API returns a flat JSON shape with sub-tier fields gated by your tier:

{
  "ipAddress": "8.8.8.8",
  "continentCode": "NA",
  "continentName": "North America",
  "countryCode": "US",
  "countryCode3": "USA",
  "countryName": "United States",
  "stateProv": "California",
  "stateProvCode": "CA",
  "city": "Mountain View",
  "district": "",
  "zipCode": "94043",
  "geonameId": 5375480,
  "latitude": 37.4056,
  "longitude": -122.0775,
  "timeZone": "-07:00",
  "timeZoneName": "America/Los_Angeles",
  "weatherCode": "USCA0746",
  "asNumber": 15169,
  "asName": "Google LLC",
  "isp": "Google LLC",
  "organization": "Google LLC",
  "connectionType": "Corporate",
  "addressType": "Unicast",
  "linkedSites": "google.com",
  "languages": "en-US,es-US,haw,fr"
}

…and the MMDB / CSV local-lookup shape returns nested records (MaxMind-compatible nested structure: country, city, subdivisions, location, traits). The dataset tier determines which fields are populated: Lite = country only on the free tier; Core = city + ASN; Extended = +ISP + connection-type + linkedSites + languages; Full = all of the above plus IP-to-Threat / Anonymous / Datacenter as separately-licensed add-on datasets.

IP Geo API ships an ?format=db-ip compatibility shim that returns the same flat camelCase shape so most call sites stop noticing the swap. The mapping for the fields ~95% of integrations rely on:

| Your old code | DB-IP REST / MMDB field | IP Geo API ?format=db-ip | Native ?format=ipgeo |
|---|---|---|---|
| IP | ipAddress | ipAddress | ip |
| Country code (ISO-2) | countryCode (REST) / country.iso_code (MMDB) | countryCode | country.iso_code |
| Country code (ISO-3) | countryCode3 (REST only) | countryCode3 | derived |
| Country name | countryName | countryName | country.name |
| Region (state/province) | stateProv | stateProv | region.name |
| Region code | stateProvCode | stateProvCode | region.iso_code |
| City | city | city | location.city |
| Postal | zipCode | zipCode | location.postal_code |
| Lat | latitude (number) | latitude (number) | location.lat |
| Lng | longitude (number) | longitude (number) | location.lng |
| Time zone (offset) | timeZone (e.g., "-07:00") | timeZone | derived |
| Time zone (IANA) | timeZoneName (e.g., "America/Los_Angeles") | timeZoneName | location.timezone |
| ASN | asNumber (integer) | asNumber (integer) | network.asn |
| ASN name | asName | asName | network.organization |
| ISP | isp | isp | network.organization (collapsed) |
| Continent | continentCode / continentName | continentCode / continentName | country.continent |

The native ?format=ipgeo shape uses snake_case nested objects (closer to MaxMind GeoIP2’s structure than DB-IP’s flat camelCase). Either format works — the shim is the path of least diff for ex-DB-IP code; the native format is cleaner for greenfield code.
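If you go native but want to keep DB-IP-shaped code paths alive during the transition, a thin flattening adapter covers the mapping table above. A sketch, with the nested field paths taken from that table and treated as assumptions until checked against a live response:

```python
def native_to_dbip(native: dict) -> dict:
    """Flatten the nested ?format=ipgeo shape into the flat db-ip shape.

    Field paths follow the mapping table in Step 2; verify them
    against a real response before relying on this in production.
    """
    country = native.get("country", {})
    region = native.get("region", {})
    loc = native.get("location", {})
    net = native.get("network", {})
    return {
        "ipAddress": native.get("ip"),
        "countryCode": country.get("iso_code"),
        "countryName": country.get("name"),
        "stateProv": region.get("name"),
        "stateProvCode": region.get("iso_code"),
        "city": loc.get("city"),
        "zipCode": loc.get("postal_code"),
        "latitude": loc.get("lat"),
        "longitude": loc.get("lng"),
        "timeZoneName": loc.get("timezone"),
        "asNumber": net.get("asn"),
        "asName": net.get("organization"),
    }
```

Missing keys come back as None rather than raising, which matches how most DB-IP-shaped consumers already handle empty fields like district.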

Step 3 — Drop in the new client (with feature flag)

Python (was maxminddb MMDB loader on a dbip-city-lite-*.mmdb file)

# before — DB-IP MMDB local lookup
import maxminddb
DB = maxminddb.open_database("/var/lib/dbip-city-lite-2026-05.mmdb")

def lookup_country(ip: str) -> str | None:
    rec = DB.get(ip)
    return rec["country"]["iso_code"] if rec else None

# after — drop-in, feature-flagged, with cache
import os
from functools import lru_cache
import requests

API_KEY = os.environ["IPGEO_API_KEY"]
USE_IPGEO = os.environ.get("USE_IPGEO_API", "0") == "1"   # feature flag

@lru_cache(maxsize=10_000)   # size-bounded only: lru_cache has no TTL; see Step 4 for a TTL cache
def _lookup(ip: str) -> dict:
    r = requests.get(
        f"https://api.ipgeo.10b.app/v1/{ip}",
        headers={"Authorization": f"Bearer {API_KEY}"},
        params={"format": "db-ip"},
        timeout=2.0,
    )
    r.raise_for_status()
    return r.json()

def lookup_country(ip: str) -> str | None:
    if USE_IPGEO:
        return _lookup(ip)["countryCode"]      # flat shape — no rewrite
    rec = DB.get(ip)
    return rec["country"]["iso_code"] if rec else None

Two structural deltas on the migration: (a) the MMDB file is gone — no more multi-GB artefact in /var/lib, no more daily refresh cron, no more DBIP_MMDB_PATH env var, no more du -sh discipline; (b) the API key now lives in an Authorization: Bearer … header that does not appear in URL logs or browser history (DB-IP REST also used /v2/{key}/{ip} path-segment auth; if you were on the REST product before, treat the existing key as already-leaked across nginx, Cloudflare, APM, and Sentry logs).

Node / TypeScript (was node-maxmind on a DB-IP MMDB)

// before
import { Reader } from "node-maxmind";
const reader = await Reader.open("/var/lib/dbip-city-lite-2026-05.mmdb");

const rec = reader.get(ip);
const country = rec?.country?.iso_code;

// after — drop-in
const cache = new Map<string, any>();
export async function geoLookup(ip: string) {
  if (process.env.USE_IPGEO_API !== "1") {
    return reader.get(ip);   // legacy MMDB path
  }
  if (cache.has(ip)) return cache.get(ip);
  const r = await fetch(
    `https://api.ipgeo.10b.app/v1/${ip}?format=db-ip`,
    { headers: { Authorization: `Bearer ${process.env.IPGEO_API_KEY!}` } }
  );
  if (!r.ok) throw new Error(`ipgeo ${r.status}`);
  const j = await r.json();
  cache.set(ip, j);
  setTimeout(() => cache.delete(ip), 60_000);   // 60-s TTL
  return j;
}

Go

// after — drop-in via the db-ip-compatibility shim
url := fmt.Sprintf("https://api.ipgeo.10b.app/v1/%s?format=db-ip", ip)
req, _ := http.NewRequestWithContext(ctx, "GET", url, nil)
req.Header.Set("Authorization", "Bearer "+os.Getenv("IPGEO_API_KEY"))
resp, err := httpClient.Do(req)
// ... unmarshal into your existing DB-IP-shaped struct

Step 4 — Cache layer (the step everyone skips)

A naive 1-call-per-request integration will burn through IP Geo API’s free 1K-req/day cap in the first hour of any production traffic. The Starter tier (€29/mo for 100K req/mo) is fine for most apps, but a 60-second cache typically deflects 70-90% of calls at zero cost.

If you want strict cache-miss bounds, add a per-host concurrency limiter so only one in-flight call per IP is ever issued. Bonus: a single cached response on the new client covers what previously required three DB lookups (city + ASN + IP-to-Threat / Anonymous if you were on the multi-product Full tier) on DB-IP, which cuts your effective lookup volume on the threat-detection path to roughly a third. Note the structural difference: MMDB-file callers used to pay zero per-lookup at runtime (process-local DB), so cache-hit-rate is less of a cost lever and more a latency lever — a cached HTTP response is ~0 ms vs ~5-15 ms over the wire even on warm TCP, which can matter on hot paths.
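The cache-plus-limiter combination can be sketched in a few lines. This is an illustrative in-memory version, not a library recommendation; the fetch callable is injected so the same class can front any HTTP client (and the per-IP lock map is left unbounded for brevity):

```python
import threading
import time

class TTLSingleFlightCache:
    """TTL cache with at most one in-flight fetch per IP."""
    def __init__(self, fetch, ttl=60.0):
        self._fetch = fetch
        self._ttl = ttl
        self._data = {}     # ip -> (expires_at, value)
        self._locks = {}    # ip -> Lock; unbounded, fine for a sketch
        self._guard = threading.Lock()

    def get(self, ip):
        hit = self._data.get(ip)
        if hit and hit[0] > time.monotonic():
            return hit[1]                       # fresh cache hit
        with self._guard:                       # pick up the per-ip lock
            lock = self._locks.setdefault(ip, threading.Lock())
        with lock:                              # one fetch per ip at a time
            hit = self._data.get(ip)
            if hit and hit[0] > time.monotonic():
                return hit[1]                   # populated while we waited
            value = self._fetch(ip)
            self._data[ip] = (time.monotonic() + self._ttl, value)
            return value
```

Concurrent requests for the same IP queue on the per-IP lock and reuse the first response, so a traffic burst against one hot IP costs one upstream call per TTL window.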

Step 5 — Shadow mode (the step that builds trust)

Before flipping any user-facing path: call both APIs and compare.

def lookup_country(ip: str) -> str:
    legacy = (DB.get(ip) or {}).get("country", {}).get("iso_code")
    if SHADOW_MODE:
        try:
            new = _lookup(ip)["countryCode"]
            if new != legacy:
                logger.warning("dbip-shadow-mismatch",
                               extra={"ip": ip, "legacy": legacy, "new": new})
        except Exception as e:
            logger.error("dbip-shadow-error",
                         extra={"ip": ip, "error": str(e)})
    return legacy

Run shadow mode for 24-48 hours. The mismatch rate on country-level data is typically <0.5% (mostly stale MMDB snapshots vs daily-refreshed managed data — DB-IP Lite is monthly-released, Core is daily, so a freshly-deployed Lite file matches our daily-refreshed data closely on day one and drifts as the month progresses). City-level is 1-3%. ASN naming is the noisiest signal — both providers ship the same numeric ASN, but the asName (DB-IP) and network.organization (IP Geo API native) fields can differ in casing or punctuation ("Google LLC" vs "GOOGLE"). The asName field on the shim re-formats to match DB-IP’s casing convention.

The single biggest mismatch class for DB-IP is the threat / anonymous / datacenter flag block: the legacy REST path returned this data as a separate response from a separately-licensed product (api.db-ip.com/threat/v1/{key}/{ip}, api.db-ip.com/anonymous/v1/{key}/{ip}, api.db-ip.com/datacenter/v1/{key}/{ip}), and the data is only populated if you have those add-on subscriptions — otherwise the field is absent. IP Geo API returns five boolean flags inline (is_proxy / is_vpn / is_tor / is_datacenter / is_residential) on every plan including free. Treat absent-vs-populated as a known-good signal, not a mismatch. For most fraud / analytics rules the binary is_proxy is the only field that matters; pin your match logic to that.
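A shadow-comparison helper that encodes the absent-vs-populated rule might look like this. The legacy isProxy key is an assumed name; adjust it to whatever your IP-to-Threat integration actually stored:

```python
from typing import Optional

def threat_flags_match(legacy: Optional[dict], new: dict) -> bool:
    """Shadow-compare threat data across providers.

    legacy is the DB-IP IP-to-Threat response, or None when the
    add-on was never subscribed (the data is simply absent there);
    new is the inline flag block. Per the advice above, only
    is_proxy drives the verdict. The isProxy key name is assumed.
    """
    if legacy is None:      # no add-on: absence is a known-good signal
        return True
    return bool(legacy.get("isProxy")) == bool(new.get("is_proxy"))
```

Logging only genuine is_proxy disagreements keeps the shadow dashboard readable; every other flag difference is packaging, not data quality.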

Step 6 — Gradual cutover

Once shadow logs are clean, flip a percentage of traffic via your feature-flag system (LaunchDarkly, Unleash, or a hashed-IP rollout):

import hashlib

def use_ipgeo(ip: str, percent: int) -> bool:
    # md5 here is for stable bucketing, not security — any stable hash works
    h = int(hashlib.md5(ip.encode()).hexdigest(), 16)
    return (h % 100) < percent

Recommended ladder: 10% → 50% → 100% over 48 hours. Watch your existing fraud-flag dashboards for unexpected spikes; the bundled threat-flag block exposes signals that a DB-IP Core license (without the IP-to-Threat or IP-to-Anonymous add-on) did not, so if you wire is_vpn=true into a soft-block rule you may see a 5-15% bump in flagged sessions. This is not a regression — it is the threat data you were paying for separately on the IP-to-Threat / Anonymous product lines, now bundled inline.

Step 7 — Decommission

Once 100% has been on IP Geo API for >7 days with no incidents:

  1. Cancel each separately-licensed add-on in the DB-IP account portal — your geolocation product (Lite / Core / Extended / Full) and IP-to-Threat and IP-to-Anonymous and IP-to-Datacenter if you had them. The product lines bill separately on monthly invoices; cancelling one does not cancel the others. Most teams forget this and end up paying for one to three more months while running on IP Geo API in parallel.
  2. Scrub the CC-BY 4.0 attribution backlink off public-facing surfaces. Search for db-ip.com and IP geolocation by DB-IP strings across your templates / React / Vue components / CMS / docs. The Lite-tier free-DB and free-REST-tier both contractually require a visible attribution link; once you’ve migrated, that obligation is gone — but it does not auto-remove itself. This is the single biggest “we technically did the migration but forgot a step” gotcha on DB-IP exits.
  3. Drop the MMDB-sync cron job. This is usually a wget + checksum + mv + service-reload script in /etc/cron.daily or /etc/cron.monthly, plus any matching Ansible / Terraform / Kubernetes ConfigMap that ships the file. Check /var/log/cron, crontab -l, your Ansible roles, and your CI pipelines for any reference to dbip- filenames.
  4. Delete the MMDB files from /var/lib, your container images, and your S3 / object-storage backups. A typical Core+Threat footprint is 200 MB–1 GB per file; container-image bloat is a real win on rebuild time and registry storage cost.
  5. Remove the DBIP_KEY / DBIP_MMDB_PATH env vars from CI / production / staging.
  6. Cancel the DB-IP Stripe USD recurring invoice — most teams forget the duplicate-invoice line until accounting flags it next quarter.
  7. Delete the legacy fallback branch from your code (keep the feature-flag scaffold for the next migration).
  8. Update your DPIA / Article 30 record — processor change from DB-IP (Brussels HQ + global CDN edges) to corem6 BV (NL/EU, EU-only edges). The “vendor is EU but edges are global” footnote on your previous Article 30 row is removed.

The 7 gotchas teams hit in week one

  1. Attribution backlink left in place. The CC-BY 4.0 obligation is gone the day you upgrade past Lite, but the snippet in your templates is not. Audit before flipping the flag — you don’t want to be advertising your old vendor’s brand in production a month after the cutover. The legal exposure on the Lite tier specifically is small, but the brand-leakage exposure compounds with every new public page that inherits the template.
  2. Multiple product lines on the invoice, not one. If you used DB-IP geolocation + IP-to-Threat + IP-to-Anonymous + IP-to-Datacenter, those bill on separate monthly invoices on potentially different renewal dates. Cancel each, or you’ll keep one running for months past the migration.
  3. countryCode3 ISO-3 vs ISO-2. DB-IP returns both countryCode (ISO-2, 2-letter) and countryCode3 (ISO-3, 3-letter) flat at the top level of the REST response. The shim preserves both. The native format only emits ISO-2 at country.iso_code; ISO-3 has to be derived (most teams have a small lookup map already, since it’s a stable list of ~250 entries). Code that reads countryCode3 directly will break under the native format — use the shim, or wire a derive-on-read at the edge.
  4. timeZone offset-string vs IANA-name. DB-IP returns timeZone: "-07:00" (raw UTC offset) AND timeZoneName: "America/Los_Angeles" (IANA zone name) — both populated. IP Geo API native returns only the IANA name at location.timezone; the shim preserves both fields. The IANA name is strictly better for any code that needs DST-correct date arithmetic; the offset-string is fine only for display. Audit which downstream code consumes timeZone vs timeZoneName before flipping.
  5. connectionType / addressType enum collapse. DB-IP exposes a granular connectionType enum ("Corporate", "Cellular", "Cable/DSL", "Dialup", "Satellite") and addressType ("Unicast", "Anycast", "Multicast") on the Extended / Full tiers. The shim returns these strings unchanged; the native format only exposes a coarser is_residential / is_datacenter boolean pair. If your fraud rules branch on connectionType == "Cellular" specifically, audit before flipping. Most teams find that the binary is_residential flag covers the cellular-vs-cable cases they actually rule on.
  6. No cache layer. Quota burn in 4-6 hours on the free tier (1K/day cap). Add the cache before flipping the flag — especially relevant for ex-MMDB-file callers used to “free at runtime”.
  7. Outbound HTTPS blocked. Production VPC egress rules deny api.ipgeo.10b.app. Get firewall change scheduled before cutover. DB-IP’s hostname (api.db-ip.com) was likely already allowlisted; the new hostname is not. Same applies to your CSP if you call from the browser.
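Gotcha #3’s derive-on-read is a static map keyed on ISO-2. A sketch; the map below is deliberately partial and should be extended from the full ISO 3166-1 list:

```python
from typing import Optional

# Partial ISO-2 -> ISO-3 map; extend from the full ISO 3166-1 list (~250 rows).
ISO2_TO_ISO3 = {
    "US": "USA", "GB": "GBR", "DE": "DEU", "FR": "FRA", "NL": "NLD",
    "BE": "BEL", "JP": "JPN", "BR": "BRA", "IN": "IND", "CA": "CAN",
}

def derive_country_code3(native: dict) -> Optional[str]:
    """Derive the flat countryCode3 field from the native nested shape."""
    iso2 = native.get("country", {}).get("iso_code")
    return ISO2_TO_ISO3.get(iso2)
```

Because ISO 3166-1 changes rarely, the map can live as a checked-in constant rather than another API dependency.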

What you’ll see in week two

Pairing pages

FAQ

How long does a real DB-IP migration take? For a single-stack web app calling the REST API with 1-4 call sites and a working CI: half an engineering day end-to-end. Multi-stack monorepos with MMDB-file loaders in 5+ services: 1-2 days, mostly in service-by-service swap-out + cache-layer wiring + cron decommission + attribution-snippet scrub. The attribution-backlink scrub is the time sink most teams underestimate, not the field-shape diff — put it on the cutover checklist 7 days ahead.

Will my DB-IP-shaped tests still pass? Yes — the compatibility shim returns the same flat camelCase JSON shape for the supported field set, including the countryCode / countryCode3 / stateProv / stateProvCode / city / zipCode / timeZone / timeZoneName / asNumber / asName field set that 95% of integrations rely on. For fields outside the shim (weatherCode, linkedSites, addressType enum, granular connectionType enum), mock the new client path or move that logic to a dedicated reference-data source.

What about the MMDB / CSV files I’m running locally? Replace the local-lookup call with an HTTP GET to IP Geo API. Cache hot IPs in Redis or equivalent for p95 latency. The migration is conceptually simpler than the REST-to-REST swap because the MMDB files have a more constrained API surface — the official maxminddb-compatible libraries all expose get(ip) or equivalent, and you can wrap the new HTTP client in a function with the same signature. The trade-off: you lose process-local sub-millisecond latency and gain ~5-15 ms of network latency per uncached lookup. If that delta breaks an SLA, keep DB-IP for that specific path and migrate everything else (hybrid pattern). For >99% of web apps, dashboards, and SaaS backends, the latency delta is invisible compared to your existing per-request budget.
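Wrapping the HTTP client behind the same get(ip) signature, as described above, keeps every MMDB call site untouched. A sketch, with the HTTP fetch injected so it is testable offline; the nested record it emits mirrors the MaxMind-compatible shape your call sites already read, but treat the exact layout as an assumption and extend it to the fields you actually use:

```python
from typing import Callable, Optional

class HttpGeoReader:
    """Stand-in for an MMDB reader: same get(ip) / close() surface.

    fetch_json is injected (e.g. a requests-based callable returning
    the flat ?format=db-ip dict, or None on a miss), so no network
    access is needed to test the re-shaping.
    """
    def __init__(self, fetch_json: Callable[[str], Optional[dict]]):
        self._fetch = fetch_json

    def get(self, ip: str) -> Optional[dict]:
        flat = self._fetch(ip)
        if not flat:
            return None
        # Re-shape the flat response into the nested record the
        # legacy MMDB call sites expect (assumed field layout).
        return {
            "country": {"iso_code": flat.get("countryCode"),
                        "names": {"en": flat.get("countryName")}},
            "city": {"names": {"en": flat.get("city")}},
            "location": {"latitude": flat.get("latitude"),
                         "longitude": flat.get("longitude"),
                         "time_zone": flat.get("timeZoneName")},
        }

    def close(self):
        pass    # no-op; keeps legacy shutdown hooks happy
```

Swap the module-level Reader for an HttpGeoReader behind the feature flag and the rest of the service never learns the lookup went over the network.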

What about the IP-to-Threat / IP-to-Anonymous / IP-to-Datacenter add-ons specifically? Those product lines are consolidated into the bundled threat block (is_proxy, is_vpn, is_tor, is_datacenter, is_residential) on every IP Geo API response, including free tier. If you had any of those add-ons, the migration removes one to three monthly invoices and one to three sync cycles. The connectionType / addressType granular enums approximate to the is_residential / is_datacenter booleans; if you use those granular enums in fraud-rule branching, audit those rules before flipping.

Do I have to scrub the CC-BY 4.0 attribution snippet immediately? Not strictly — the obligation is gone the moment you stop calling DB-IP, not the moment you publish a snippet-scrub PR. But the longer the snippet stays, the longer your public surfaces advertise your old vendor’s brand, and the harder it becomes to track down every template / component / CMS field that inherits it. Best to do it on the cutover-day PR, not “next sprint”.

What’s the rollback story if something goes wrong? The feature flag gives you a 1-second flip back to DB-IP. Keep the DB-IP integration working for at least 30 days post-cutover; if you’re on monthly billing the marginal cost is one month’s invoice, which is cheap insurance. You can leave the IP-to-Threat / Anonymous / Datacenter add-ons running too for the same reason: the monthly cost is already sunk, and they make useful belt-and-suspenders during the watch window.

Can I migrate one service at a time? Yes — and it’s the recommended approach. Each call site is independent. Migrate the lowest-risk one first (often a dashboard analytics path or a server-side log enrichment job), measure for a week, then move to the next. There is no all-or-nothing requirement.

Do you support a /bulk endpoint like DB-IP’s MMDB local-lookup? DB-IP does not ship a high-throughput REST /bulk endpoint — bulk users typically use the local DB file. We support a JSON POST to /v1/bulk with up to 100 IPs per call (paginate for larger batches). The response is a flat array; the per-IP response shape is identical to the single-lookup ?format=db-ip response. This is one of the biggest workflow improvements for ex-MMDB-file batch consumers — no DB sync, no process restart, no version skew, just an HTTP POST.
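Batching for the bulk endpoint is a few lines of chunking. A sketch with the HTTP POST injected as a callable, so the pagination logic is testable on its own:

```python
from typing import Callable, List

def bulk_lookup(ips: List[str],
                post_json: Callable[[List[str]], List[dict]],
                batch_size: int = 100) -> List[dict]:
    """Split ips into batches of at most batch_size and POST each batch.

    post_json is an injected callable taking one batch and returning
    the flat per-IP response array described above; in production it
    would wrap the HTTP POST to the bulk endpoint.
    """
    results: List[dict] = []
    for i in range(0, len(ips), batch_size):
        results.extend(post_json(ips[i:i + batch_size]))
    return results
```

Because the response array is order-preserving per batch, concatenating batch results keeps input and output aligned without extra bookkeeping.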

What if I was on the DB-IP Lite free tier? Then the migration math shifts away from cost (both DB-IP Lite and our free 1K-req/day tier are free) toward feature gain: Lite comes with a CC-BY 4.0 attribution requirement and ships only country-level or city-level data (depending on which Lite product) on a monthly refresh; our free tier ships full city + ASN + threat-block on a daily refresh with no attribution required. Side-project teams that “just need geo + light bot detection” usually find the migration is a net feature gain at zero cost change.

Why does DB-IP split geolocation from threat data at all? Historically the proxy/VPN/datacenter data sources were licensed and updated separately, and DB-IP passed through that packaging. Our pricing posture is “threat is a baseline expectation in 2026, not an upsell” — we vertically integrated the threat data into one quota, one invoice, one response. That difference in posture is the single biggest reason teams hit this migration guide.

Related migration & comparison reading

Industry deep-dives


Last reviewed 2026-05-10 · IP Geo API team · Comments / corrections: hello@ipgeo.10b.app

Pairs with the full DB-IP alternative comparison page and the head-on IP Geo API vs DB-IP review.


Get early access — 50% off for 12 months

First 100 signups lock in 50% off any paid plan for the first year. No credit card required — we’ll email you at launch.