
Populating NetBox from Real Infrastructure with ToolMesh

A typical scenario: three cloud providers, two hypervisors, a handful of edge sites. Around 200 devices and VMs. You spin up NetBox, stare at the empty dashboard, and realize the real work hasn’t started. Writing import scripts for five different APIs is a week of work you don’t want to do — so NetBox sits empty, and the spreadsheet wins again.

This post shows how to skip that week. We walk through using ToolMesh to pull live infrastructure data from multiple sources and push it into NetBox via the NetBox DADL connector. For the business case and broader “why,” read the companion post on dunkel.cloud. Here we focus on the how.

The data flow:

Hetzner Cloud ───┐
Linode        ───┤
Xen Orchestra ───┼── ToolMesh ── NetBox DADL ── NetBox API
Unifi         ───┤
Tailscale     ───┘

Each source system has its own API, its own authentication, and its own data model. ToolMesh acts as the gateway: it authenticates against each source using stored credentials, normalizes the responses, and writes the results into NetBox through the NetBox DADL definition.

DADL (Declarative API Definition Language) is a YAML-based format that describes REST APIs as tool definitions. Instead of writing a custom MCP server wrapper for each API, you describe the endpoints, parameters, and authentication in one file. ToolMesh turns that description into callable tools at runtime — handling credential injection, authorization, and audit logging automatically. The NetBox DADL covers most of the NetBox REST API; each source DADL covers one cloud or appliance. No custom glue code sits between them.
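To make that concrete, here is a hypothetical sketch of what a DADL-style source definition could look like. The field names and layout are illustrative assumptions, not the actual DADL schema; only the Hetzner endpoint paths (`/servers`) and base URL are real.

```yaml
# Illustrative sketch only — not the actual DADL schema.
api:
  name: hetzner-cloud
  base_url: https://api.hetzner.cloud/v1
  auth:
    type: bearer                 # ToolMesh injects the stored token at call time
    credential: hetzner_api_token
tools:
  - name: list_servers
    method: GET
    path: /servers
    description: List all servers in the project
  - name: get_server
    method: GET
    path: /servers/{id}
    parameters:
      - name: id
        in: path
        required: true
```

ToolMesh would expose `list_servers` and `get_server` as callable tools, so the agent never sees the token and every call lands in the audit log.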

The core challenge is translating between data models. A Hetzner “server” becomes a NetBox “device.” A Linode “instance” also becomes a device, but the field names differ. Here is how the main objects map:

| Source        | Source Object | NetBox Object   | Key Fields |
|---------------|---------------|-----------------|------------|
| Hetzner Cloud | Server        | Device          | name, server_type → device_type, datacenter → site |
| Hetzner Cloud | Network       | Prefix          | ip_range → prefix, name → description |
| Linode        | Instance      | Device          | label → name, type → device_type, region → site |
| Linode        | IPv4/IPv6     | IP Address      | address → address, linode_id → assigned device |
| Xen Orchestra | VM            | Virtual Machine | name_label → name, power_state → status |
| Xen Orchestra | Network       | VLAN            | name_label → name, MTU |
| Unifi         | Device        | Device          | name, model → device_type, site → site |
| Tailscale     | Node          | Device          | hostname → name, addresses → IP Addresses |

Concretely, a single Hetzner server comes back looking like this:

{
  "id": 42891337,
  "name": "web-fra-01",
  "server_type": { "name": "cx22", "cores": 2, "memory": 4 },
  "datacenter": { "name": "fsn1-dc14", "location": { "name": "fsn1" } },
  "public_net": { "ipv4": { "ip": "95.217.xx.xx" } },
  "status": "running"
}

After ToolMesh runs it through the NetBox DADL, that single source object produces three linked NetBox records: a Site (fsn1), a DeviceType (cx22, 2 cores, 4 GB), and a Device (web-fra-01) with a primary IP attached. The Site and DeviceType are created once and reused on every subsequent Hetzner server in the same datacenter.
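A minimal sketch of that translation, working on plain dicts. This is illustrative only: the real orchestration layer upserts through the NetBox API and reuses existing Site and DeviceType records rather than re-emitting them.

```python
def hetzner_to_netbox(server: dict) -> dict:
    """Map one Hetzner server payload to the three NetBox records it implies.

    Hypothetical helper for illustration; real code resolves IDs via the
    NetBox API and skips Site/DeviceType creation when they already exist.
    """
    st = server["server_type"]
    site_slug = server["datacenter"]["location"]["name"]   # e.g. "fsn1"
    return {
        "site": {"name": site_slug, "slug": site_slug},
        "device_type": {
            "model": st["name"],                           # e.g. "cx22"
            "comments": f'{st["cores"]} cores, {st["memory"]} GB',
        },
        "device": {
            "name": server["name"],
            "device_type": st["name"],
            "site": site_slug,
            "status": "active" if server["status"] == "running" else "offline",
            "primary_ip4": server["public_net"]["ipv4"]["ip"],
        },
    }
```

Running the example payload through this produces exactly the Site, DeviceType, and Device records described above.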

Mappings are not hardcoded. The field translation logic lives in the orchestration layer between the source DADL and the NetBox DADL. You can override mappings, add custom fields, or skip objects entirely — if your servers encode a role in the hostname (web-fra-01, db-fra-02), a small transform sets device_role accordingly. Dependencies are resolved automatically: prerequisite objects get created before the main record.

Blindly syncing cloud inventory into a production NetBox is a reasonable thing to be nervous about. The approach we recommend is graduated trust — start with zero risk and escalate as confidence grows. None of this is a bespoke CLI; it all runs as natural-language prompts to Claude (or any agent) with ToolMesh connected. The examples below are the actual prompts.

Phase 1: Initial Import (empty database). Before any devices can be created, NetBox needs a minimal scaffold: at least one Site (devices must belong to one), a Manufacturer for device types, and ideally a Tenant for organizational grouping. Give Claude the facts about your infrastructure and let it set things up:

Set up NetBox for my company. Our tenant is “Acme GmbH”. I run infrastructure in two regions: Linode Frankfurt (eu-central) and Linode Amsterdam (nl-ams) — create those as Sites under a region called “Europe”. The Manufacturer for all Linode instances is “Akamai/Linode”. Once that’s in place, import all my Linode instances as Devices, create the matching Device Types (g6-standard-2, g6-standard-4, etc.) on the fly, and attach their public IPv4 and IPv6 addresses. NetBox is empty, so no dry-run needed — just do it and show me a summary when done.

Claude calls the Linode tools to list instances, the NetBox tools to create sites, tenant, manufacturer, device types, devices, and IPs in the right order. The summary at the end tells you how many of each were created.
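The ordering matters because NetBox rejects a device whose site or device type does not exist yet. The dependency order can be sketched as data — a hypothetical helper for illustration; the endpoint paths match the NetBox 4.x REST API, but the real agent resolves numeric IDs from earlier responses rather than passing slugs:

```python
def scaffold_plan(tenant: str, region: str, sites: list[tuple[str, str]],
                  manufacturer: str) -> list[tuple[str, dict]]:
    """Return (endpoint, payload) pairs in NetBox dependency order:
    tenant and region first, then sites, then the manufacturer."""
    def slug(s: str) -> str:
        return s.lower().replace(" ", "-").replace("/", "-")

    plan = [
        ("tenancy/tenants/", {"name": tenant, "slug": slug(tenant)}),
        ("dcim/regions/", {"name": region, "slug": slug(region)}),
    ]
    for name, site_slug in sites:
        plan.append(("dcim/sites/", {"name": name, "slug": site_slug,
                                     "region": slug(region)}))
    plan.append(("dcim/manufacturers/",
                 {"name": manufacturer, "slug": slug(manufacturer)}))
    return plan
```

Device types, devices, and IP addresses follow the same pattern: each POST only happens once everything it references is in place.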

Phase 2: Read-only Diff. Once NetBox has data, the next check asks what would change without touching anything:

Compare my current Linode instances with what’s in NetBox. Don’t change anything — just tell me the differences in plain English.

The response is prose, not a machine diff:

I compared 23 Linode instances against the 22 devices in NetBox tagged source=linode. Four differences:

  1. New instance: db-ams-04 exists at Linode (region nl-ams, type g6-standard-4, created yesterday) but isn’t in NetBox yet.
  2. IP change: web-ams-01 has a new primary IPv4 in NetBox (172.104.x.12) but Linode now reports 172.104.x.47. Looks like it was rebuilt.
  3. Status drift: worker-ams-03 is marked active in NetBox, but Linode reports it as offline (powered off 6 days ago).
  4. Missing at source: legacy-bastion-ams exists in NetBox but is no longer at Linode. Probably decommissioned — worth confirming before I touch it.

Want me to apply 1–3 and flag 4 for your review?

This is where most of the trust gets built. You see exactly what would happen, in language you can audit at a glance.
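Under the hood, this kind of report reduces to a keyed comparison. A minimal sketch of the reconciliation logic, assuming both sides are dicts keyed by instance name (field names are illustrative):

```python
def diff_inventory(source: dict[str, dict], netbox: dict[str, dict]) -> dict:
    """Compare live inventory against NetBox records keyed by name.

    Returns names to create, names to review before deleting, and
    per-device field drift as {field: (netbox_value, source_value)}.
    """
    create = [n for n in source if n not in netbox]
    review_delete = [n for n in netbox if n not in source]
    update = {}
    for n in source.keys() & netbox.keys():
        drift = {k: (netbox[n].get(k), v)
                 for k, v in source[n].items() if netbox[n].get(k) != v}
        if drift:
            update[n] = drift
    return {"create": create, "review_delete": review_delete, "update": update}
```

The agent's job on top of this is the part a script cannot do: phrasing the drift in plain English and deciding which category each change falls into.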

Phase 3: Reconcile with Confirmation. Apply the changes with a human in the loop on anything destructive:

Go ahead with 1, 2, and 3. For anything that deletes or marks a device as decommissioned, ask me first.

Creates and updates happen straight away; the legacy bastion triggers a confirmation prompt (“Delete legacy-bastion-ams from NetBox, or mark it decommissioning and keep the record? [delete / decommission / skip]”) before anything irreversible happens. Destructive operations always require explicit confirmation, even once the workflow is automated.

Phase 4: Scheduled Sync. Once the mapping is trustworthy, hand the workflow to a scheduled agent:

Every hour, pull the current Linode state and reconcile it with NetBox. Apply additions and field updates automatically. For anything that would delete a device or change its status to decommissioned, open a ticket in our tracker instead of acting — I’ll review those manually.

ToolMesh runs the prompt on the schedule, the agent does the same work it did interactively, and destructive actions stay gated on human review. New servers appear in NetBox minutes after provisioning. Every tool call lands in the audit log, so you can always ask later what got changed, when, and why.

The NetBox DADL connector covers most of the NetBox 4.x REST API. Source connectors for Hetzner Cloud, Linode, Xen Orchestra, Unifi, and Tailscale are available today. If you run infrastructure that is not covered yet, a new DADL file takes minutes, not days — check the DADL registry for existing definitions or contribute on GitHub.

The real payoff lands after the import. A populated NetBox is not just documentation — it becomes the substrate an AI agent can reason over. “Which servers in Falkenstein are running kernel 5.x and need a reboot window next Tuesday?” is a NetBox query plus a ToolMesh tool call, not a week of shell scripts. That is why we started here: NetBox is the map, ToolMesh is how agents read and redraw it safely.

For the strategic angle, read the full analysis on dunkel.cloud.