# Stop Rebuilding REST API Wrappers for MCP
Every REST-to-MCP wrapper starts the same way. You pick an API, open the docs, and begin writing glue code. Authentication, parameter mapping, error handling, pagination, retries, response shaping. Two hundred lines later, you have one integration. Then you do it again for the next API.
The real cost is not the 200 lines. It is reimplementing execution policy — credential handling, access control, audit logging — around every single wrapper. That work is expensive, repetitive, and difficult to scale.
DADL takes a different approach: instead of writing a custom wrapper for every REST API, you describe the API declaratively. ToolMesh turns that description into tools and handles runtime concerns — including the ones most wrappers quietly ignore: credential isolation, authorization, and audit logging.
## The wrapper you keep rebuilding
Here is a minimal MCP server for a single DeepL translation endpoint in TypeScript. Stripped down — no tests, no retries, no pagination, no rate limiting:
```typescript
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

const API_KEY = process.env.DEEPL_API_KEY;
if (!API_KEY) throw new Error("DEEPL_API_KEY required");

const server = new Server(
  { name: "deepl-mcp", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "translate",
      description:
        "Translate text into a target language. " +
        "Supports up to 50 texts per request.",
      inputSchema: {
        type: "object",
        properties: {
          text: {
            type: "array",
            items: { type: "string" },
            description: "Array of texts to translate",
          },
          target_lang: {
            type: "string",
            description: "Target language code (EN-US, DE, FR ...)",
          },
          source_lang: {
            type: "string",
            description: "Source language code. Omit for auto-detect",
          },
          formality: {
            type: "string",
            enum: ["default", "more", "less", "prefer_more", "prefer_less"],
          },
        },
        required: ["text", "target_lang"],
      },
    },
  ],
}));

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name !== "translate") {
    throw new Error(`Unknown tool: ${request.params.name}`);
  }

  const { text, target_lang, source_lang, formality } =
    request.params.arguments ?? {};

  const body: Record<string, unknown> = { text, target_lang };
  if (source_lang) body.source_lang = source_lang;
  if (formality) body.formality = formality;

  const res = await fetch("https://api.deepl.com/v2/translate", {
    method: "POST",
    headers: {
      Authorization: `DeepL-Auth-Key ${API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(body),
  });

  if (!res.ok) {
    const err = await res.text();
    return {
      content: [{ type: "text", text: `DeepL error ${res.status}: ${err}` }],
      isError: true,
    };
  }

  const data = await res.json();
  return {
    content: [{ type: "text", text: JSON.stringify(data, null, 2) }],
  };
});

const transport = new StdioServerTransport();
await server.connect(transport);
```

That is 90 lines for a single endpoint — already stripped to the minimum. No retries. No rate limit handling. No pagination. No structured error mapping. And the API key lives in an environment variable, where credentials are often distributed across client and server configs without centralized governance.
A production-grade wrapper with multiple endpoints, proper error handling, retry logic, and rate limiting easily grows past 200 lines — plus a package.json, a build step, and a process to keep running.
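To make that concrete, here is a minimal sketch of the retry-with-backoff logic every one of those wrappers ends up reimplementing. The function names and shapes are illustrative, not part of any SDK; the retryable status codes mirror the DeepL example:

```typescript
// Sketch of the backoff policy each hand-rolled wrapper duplicates.
// fetchFn is injected so the policy stays testable in isolation.
type FetchLike = () => Promise<{ ok: boolean; status: number }>;

const RETRYABLE = new Set([429, 502, 503, 529]);

async function withRetry(
  fetchFn: FetchLike,
  maxRetries = 3,
  initialDelayMs = 1000
): Promise<{ ok: boolean; status: number }> {
  let delay = initialDelayMs;
  for (let attempt = 0; ; attempt++) {
    const res = await fetchFn();
    // Give up on success, non-retryable errors, or exhausted budget.
    if (res.ok || !RETRYABLE.has(res.status) || attempt >= maxRetries) {
      return res;
    }
    await new Promise((resolve) => setTimeout(resolve, delay));
    delay *= 2; // exponential backoff
  }
}
```

And this is only retries — rate limiting, pagination, and error mapping each add a comparable chunk of policy code per wrapper.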
Now multiply that by every API you want to connect.
## The same integration in DADL
Here is the same DeepL translation tool described in DADL:
```yaml
backend:
  name: deepl
  type: rest
  base_url: https://api.deepl.com/v2
  description: "DeepL translation API"

auth: # ← credential injection handled by ToolMesh
  type: bearer
  credential: deepl_auth_key # references a server-side secret, never exposed to the model
  prefix: "DeepL-Auth-Key "

defaults:
  errors: # ← retry and error handling for all tools
    retry_on: [429, 502, 503, 529]
    retry_strategy:
      max_retries: 3
      backoff: exponential
      initial_delay: 1s

tools:
  translate:
    method: POST
    path: /translate
    access: write # ← access classification for authorization
    description: "Translate text into a target language"
    params: # ← parameter mapping: type, location, validation
      text:
        type: array
        in: body
        required: true
        description: "Array of texts to translate"
      target_lang:
        type: string
        in: body
        required: true
        description: "Target language code (EN-US, DE, FR)"
      source_lang:
        type: string
        in: body
        description: "Source language. Omit for auto-detect"
      formality:
        type: string
        in: body
        description: "default, more, less, prefer_more, prefer_less"
```

40 lines of YAML. No code. No build step. No custom server runtime to build and operate.
But shorter is not the point. The point is what happens at runtime.
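To illustrate one of those runtime differences: the `credential: deepl_auth_key` line above names a secret, it never contains one. A minimal sketch of what server-side injection means — the store and function names here are assumptions for illustration, not ToolMesh internals:

```typescript
// Hypothetical sketch of server-side credential injection.
// The model and client only ever see the credential *name*; the secret
// is resolved at call time and leaves only inside the outbound header.
const secretStore = new Map<string, string>([
  ["deepl_auth_key", "xxxx-xxxx"], // loaded from a vault in practice
]);

function buildAuthHeader(credential: string, prefix: string): string {
  const secret = secretStore.get(credential);
  if (!secret) throw new Error(`Unknown credential: ${credential}`);
  return `${prefix}${secret}`; // attached to the outbound request only
}
```

Contrast this with the wrapper above, where the key sits in an environment variable of whatever process the client launches.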
## The control layer you are not building
Most hand-rolled wrappers focus on one thing: making an API callable. They get the request out the door and the response back. That is necessary, but it is not sufficient for production.
The harder questions are: Who is allowed to call this tool? Where do the credentials live? What happens to sensitive data in the response? Is there an audit trail?
With a custom wrapper, those concerns either get bolted on after the fact or quietly ignored. With DADL and ToolMesh, they are the default:
| Concern | Custom MCP wrapper | DADL + ToolMesh |
|---|---|---|
| Credentials | In client config, visible to the model | Injected server-side at runtime, never exposed |
| Authorization | Typically none — full access for everyone | Per-tool, per-user access control |
| Audit logging | Not built in | Every call logged and queryable |
| Output filtering | Not built in | Policies can redact sensitive data before it reaches the model |
| Retries | You implement backoff logic | Declared in `retry_strategy`, handled by runtime |
| Pagination | You build cursor/offset logic | Configured via `pagination.strategy` |
| Error handling | You parse responses manually | Structured via `errors.message_path` |
| Deployment | Build, package, run a process | Drop a `.dadl` file into ToolMesh |
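As a sketch of the declarative side of that table: the field names `retry_strategy`, `pagination.strategy`, and `errors.message_path` come from the rows above, but the surrounding nesting and the other keys here are assumptions for illustration, not confirmed DADL schema:

```yaml
defaults:
  errors:
    message_path: message   # where the API nests its human-readable error
  pagination:
    strategy: cursor        # runtime follows cursors; the model sees one call
    cursor_param: page_token # hypothetical field name, for illustration only
```

The point is the shape of the work: each concern becomes a reviewable declaration rather than per-wrapper imperative code.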
DADL describes what this API is. ToolMesh handles how calls to it are executed and controlled. That separation is the architectural shift that matters — not just the line count.
## You do not write it — your LLM does
DADL was designed to be AI-native. The format is compact, declarative, and close to the structure of existing API specifications.
That means you can hand your LLM an OpenAPI spec, a Swagger definition, or even raw API documentation and ask it to produce a DADL file. In most cases, the result is usable on the first pass. Review it, adjust edge cases, deploy.
Compare that to asking an LLM to generate a complete MCP server. The model has to produce working code — imports, type definitions, error handling, transport setup, build configuration. Every line is a potential bug. The output needs testing, debugging, and manual validation before it goes anywhere near production.
With DADL, the LLM produces a data structure. If a field is wrong, you fix one line of YAML. If a parameter is missing, you add three lines. When something breaks, it is more often a reviewable spec issue than an opaque runtime exception buried in generated integration code.
That is the real unlock. DADL is not just easier for humans to write. It is dramatically easier for AI to produce correctly.
## How this scales
The DeepL example above covers a single endpoint. Real APIs have dozens.
The full DeepL DADL definition covers 13 tools — translate, document upload, document status, glossaries, text improvement, language lists, usage tracking — in under 500 lines of YAML. The GitHub definition covers 202 tools: repositories, issues, pull requests, commits, code search, and more. Building that as a hand-coded MCP server is not a weekend project. It is a product.
As of today, the DADL registry contains 11 backend definitions with 985 tools across APIs like GitHub, Cloudflare, Hetzner Cloud, Linode, GitLab, and others. Each one is a single reviewable YAML file.
As the number of tools grows, there is another scaling concern: context window overhead. ToolMesh addresses this with Code Mode, where the model works through two meta-tools — list_tools and execute_code — instead of carrying hundreds of tool definitions directly in context. That keeps tool discovery efficient regardless of how many backends are connected.
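A minimal sketch of the Code Mode idea — the names and shapes below are assumptions for illustration, not the actual ToolMesh API:

```typescript
// Two meta-tools replace hundreds of per-tool definitions in context.
type ToolDef = { name: string; backend: string; description: string };

// Stand-in registry; in practice this spans every connected backend and
// never enters the model's context wholesale.
const registry: ToolDef[] = [
  { name: "translate", backend: "deepl", description: "Translate text" },
  { name: "create_issue", backend: "github", description: "Open an issue" },
];

// Meta-tool 1: discover tools on demand via search.
function listTools(query: string): ToolDef[] {
  const q = query.toLowerCase();
  return registry.filter(
    (t) => t.name.includes(q) || t.description.toLowerCase().includes(q)
  );
}

// Meta-tool 2: run model-written code against a tool dispatcher.
// callTool is a stub here; a real runtime would route to the backend.
async function executeCode(
  code: (
    callTool: (name: string, args: object) => Promise<string>
  ) => Promise<string>
): Promise<string> {
  const callTool = async (name: string, args: object) =>
    `dispatched ${name}(${JSON.stringify(args)})`;
  return code(callTool);
}
```

The model's context holds only these two entry points; everything else is fetched or executed on demand.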
## What DADL does not do
DADL is a declarative format for describing REST APIs. It is not a workflow engine, not a general-purpose programming language, and not a replacement for every kind of MCP server.
If your integration needs custom business logic, complex multi-step orchestration, or non-REST protocols, a custom server is still the right choice. ToolMesh can work with those too — it connects both DADL-described backends and existing MCP servers.
But for the large class of integrations that are fundamentally “call a REST API with the right auth and parameters” — and that is most of them — a custom wrapper is a poor default. It duplicates infrastructure concerns, spreads control logic across codebases, and creates maintenance work that adds no unique value.
## The shift
The MCP ecosystem solved the protocol problem. Clients and hosts speak the same language.
But protocol standardization did not make backend integration cheap. Every new API still meant a new server, a new codebase, a new runtime, a new maintenance burden. That is why most teams stop after a handful of integrations and the long tail of useful APIs stays disconnected.
DADL changes the unit of work. Instead of “build a server,” the task becomes “describe the API.” Instead of a software project per integration, you get a reviewable YAML file per backend — executed through a runtime that handles security, credentials, and governance centrally.
That is not just less code. It is less variance, less risk, and a fundamentally better default for connecting AI agents to real systems.
Explore the DADL registry, check out the ToolMesh documentation, or browse the source on GitHub.