Documentation
Fetch model details through the API, with the exact fields you actually need.
The most common workflow in ModelMeta is simple: search the registry, pick a model slug, then request the full model record. This page documents that flow in detail and shows the exact endpoints, examples, and response fields involved.
Find the model slug first
Use the list endpoint to search the registry and read the `model` field. That slug is what the detail endpoint expects.
Request the detail endpoint
Call `GET /v1/models/{slug}` to retrieve the full model record with pricing, limits, parameters, and capability flags.
Use the right base URL
Inside the ModelMeta web app, use `/api/backend/...`. Outside the app, call your upstream registry host directly.
Send auth only when needed
If your upstream is protected, include a bearer token. The in-app `/api/backend` proxy forwards the current session automatically.
Primary endpoint
Get a single model detail
Use the model slug as the path parameter. The slug is the `model` field returned by the list endpoint, not the display name.
/v1/models/{slug}

Example slugs: gpt-4o, claude-3-7-sonnet, gemini-2.5-pro.
Inside the ModelMeta app
Use the same-origin proxy so the current session cookie can be forwarded upstream.
/api/backend/v1/models/{slug}

Direct upstream call
Call your registry host directly when integrating from another service or backend.
https://api.modelmeta.dev/v1/models/{slug}

How to get the slug
Search first, then open the detail record.
1. Search the model list. Request the list endpoint with `search`, `page`, and `page_size`.
2. Read the `model` field. That field is the stable slug for the detail route.
3. Call the detail route. Request `GET /v1/models/{slug}` to retrieve the full model object.
Step 1
Search the list endpoint
Use this when you only know part of the model name and need the canonical slug first.
curl 'https://api.modelmeta.dev/v1/models?search=gemini&page=1&page_size=20' \
-H 'Authorization: Bearer YOUR_API_KEY'
# read the "model" field from the list response
# for example: "gemini-2.5-pro"
curl 'https://api.modelmeta.dev/v1/models/gemini-2.5-pro' \
  -H 'Authorization: Bearer YOUR_API_KEY'

List response
Read the `model` field from the list payload
The detail endpoint expects the slug from the list response, not `model_name`.
{
"object": "list",
"data": [
{
"id": "mdl_123",
"provider": "google",
"model": "gemini-2.5-pro",
"model_name": "Gemini 2.5 Pro",
"input_price_per_million": 1.25,
"output_price_per_million": 10,
"context_window": 2097152,
"is_flagship": true
}
],
"total": 1,
"page": 1,
"page_size": 20,
"total_pages": 1
}

List endpoint reference
Search the registry before you open a specific model record
In most integrations the first step is `GET /v1/models`, because that response gives you the canonical `model` slug for the detail route.
GET /v1/models

Example query parameters: search=gemini, page=1, page_size=20, provider_slug=google
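If you prefer `fetch` over curl for the list call, a minimal TypeScript sketch might look like this; the base URL and `MODELMETA_API_KEY` variable simply mirror the curl examples on this page, not a required setup.

// Search the registry for the canonical slug, then read the `model` field.
// The base URL and MODELMETA_API_KEY env var mirror the curl examples here.
const params = new URLSearchParams({
  search: 'gemini',
  page: '1',
  page_size: '20',
  provider_slug: 'google',
});

const listResponse = await fetch(`https://api.modelmeta.dev/v1/models?${params}`, {
  headers: {
    Authorization: `Bearer ${process.env.MODELMETA_API_KEY}`,
    Accept: 'application/json',
  },
});

const list = await listResponse.json();
const slug = list.data[0]?.model; // e.g. "gemini-2.5-pro", ready for the detail route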
Built-in app helper

Use the service layer already in this codebase
If you are building inside this Next.js app, you do not need to write the raw fetch yourself. The existing service wrapper already exposes `modelsApi.getModel(slug)`.
TypeScript
Use `modelsApi.getModel` inside the app
This is the cleanest path when you are building UI in the current ModelMeta codebase.
import { modelsApi } from '@/services';
async function loadModelDetail() {
const model = await modelsApi.getModel('gemini-2.5-pro');
return {
name: model.model_name,
inputPrice: model.input_price,
outputPrice: model.output_price,
contextWindow: model.context_window,
    controls: model.parameters,
features: model.features,
};
}

Request examples
Ways to request a single model detail
Choose the request style that matches where you are integrating. The response body is a model object, not a wrapped envelope.
curl
Call the upstream registry directly
Use this from a backend service, CLI script, or external integration.
curl 'https://api.modelmeta.dev/v1/models/gemini-2.5-pro' \
-H 'Authorization: Bearer YOUR_API_KEY' \
  -H 'Accept: application/json'

curl
Call through the app proxy
Use this from the same ModelMeta deployment when a logged-in session should be forwarded.
curl 'http://localhost:3002/api/backend/v1/models/gemini-2.5-pro' \
-H 'Cookie: YOUR_SESSION_COOKIE' \
  -H 'Accept: application/json'

Browser
Fetch from a frontend running on the same domain
The proxy route avoids CORS and keeps the session handling on the server side.
const slug = 'gemini-2.5-pro';
const response = await fetch(`/api/backend/v1/models/${slug}`, {
method: 'GET',
credentials: 'include',
headers: {
Accept: 'application/json',
},
});
if (!response.ok) {
throw new Error(`Failed to load model detail: ${response.status}`);
}
const model = await response.json();
console.log(model.model_name, model.input_price, model.context_window);

Server
Fetch from Node.js or a backend worker
Use the direct upstream endpoint and attach your API key when required.
const slug = 'gemini-2.5-pro';
const response = await fetch('https://api.modelmeta.dev/v1/models/' + slug, {
headers: {
Authorization: 'Bearer ' + process.env.MODELMETA_API_KEY,
Accept: 'application/json',
},
});
if (!response.ok) {
throw new Error(`Failed to load model detail: ${response.status}`);
}
const model = await response.json();
console.log(model.parameters, model.features?.structured_output);

Response
Example detail payload
This is the shape you can expect from `GET /v1/models/{slug}`.
{
"id": "mdl_123",
"provider": "google",
"model": "gemini-2.5-pro",
"model_name": "Gemini 2.5 Pro",
"family": "gemini-2.5",
"tagline": "High-end reasoning and multimodal model",
"description": "Detailed profile for pricing, limits, parameters, and capabilities.",
"currency": "USD",
"input_price": 1.25,
"output_price": 10,
"cached_input_price": 0.3125,
"context_window": 2097152,
"max_output_tokens": 32768,
"input_modalities": ["text", "image"],
"output_modalities": ["text"],
"parameters": {
"temperature": { "supported": true, "min": 0, "max": 2, "default": 1 },
"top_p": { "supported": true, "min": 0, "max": 1, "default": 0.95 },
"reasoning_effort": { "supported": true }
},
"features": {
"streaming": true,
"structured_output": true,
"function_calling": true,
"reasoning": true
},
"rate_limits": [
{ "tier": "default", "rpm": 60, "tpm": 60000, "rpd": 1000, "batch_limit": 100 }
],
"updated_at": 1715596800
}

Status codes
What to handle
200: The model exists and the detail record is returned.
401: Your upstream requires auth and the request is missing valid credentials.
404: The slug does not exist in the registry.
5xx: The upstream registry is unavailable or returned an internal error.
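As a rough illustration of handling these codes, the sketch below wraps the detail call and maps the common failure modes; `getModelDetail` is an illustrative helper, and the base URL and env var mirror the server example above rather than being the only valid choices.

// A rough sketch of status handling for GET /v1/models/{slug}.
// getModelDetail is an illustrative helper; the base URL and env var
// mirror the server example above.
async function getModelDetail(slug: string) {
  const response = await fetch(`https://api.modelmeta.dev/v1/models/${slug}`, {
    headers: {
      Authorization: `Bearer ${process.env.MODELMETA_API_KEY}`,
      Accept: 'application/json',
    },
  });

  if (response.status === 404) return null; // unknown slug
  if (response.status === 401) throw new Error('Missing or invalid credentials');
  if (!response.ok) throw new Error(`Upstream error: ${response.status}`); // 5xx and friends

  return response.json(); // the model object itself, not a wrapped envelope
}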
Common mistakes
Use `model`, not `model_name`
The path parameter is the canonical slug from the list endpoint. Display names are for UI, not routing.
Do not expect a wrapped `{ data: ... }` payload
The model detail endpoint returns the model object directly. Treat the response body itself as the record.
Use `/api/backend` only inside the app
That proxy is for same-origin app requests. External scripts or backends should call the upstream base URL directly.
Handle missing optional sections
Some models will not publish rate limits, benchmark data, or every pricing field. Treat nested sections as optional.
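To make that last point concrete, here is a hedged sketch of defensive reads over a detail record; the field names come from the example payload above, and `summarize` is just an illustrative helper, not part of any API.

// Hedged sketch: defensive reads over optional sections of a detail record.
// Field names follow the example payload above; `summarize` is illustrative.
function summarize(model: any) {
  const rateLimits = model.rate_limits ?? []; // some models publish no limits
  const temperature = model.parameters?.temperature; // may be undefined

  return {
    slug: model.model, // route with `model`, never `model_name`
    tiers: rateLimits.length,
    supportsStreaming: model.features?.streaming ?? false,
    temperatureRange: temperature?.supported ? [temperature.min, temperature.max] : null,
  };
}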
Related endpoints
GET /v1/models
List models. Search and paginate the registry before opening a specific model detail page.

GET /v1/models/{slug}
Get model detail. Return the full model record with pricing, limits, parameters, and capability data.

GET /v1/providers/{slug}
Get provider detail. Return provider metadata, docs URLs, and provider-level context for a model source.
Response field guide
The fields that matter most on a detail response
The model object is intentionally structured around the questions users ask when they compare models: what it costs, what limits it has, which controls are supported, and what it can actually do.
Identity
Fields that identify the model and how it should be displayed.
`id`: Internal record id.
`model`: Stable API slug used in the detail route.
`model_name`: Human-readable display name.
`provider`: Provider slug, for example `openai` or `google`.
`family` / `family_name`: Model family grouping.
`version_tag`: Optional release or version label.
Pricing
Commercial fields normalized to per-million-token pricing.
`currency`: Usually `USD`.
`input_price`: Input token price per 1M tokens.
`output_price`: Output token price per 1M tokens.
`cached_input_price`: Cached input price when published.
`batch_input_price` / `batch_output_price`: Batch pricing fields when supported.
Limits and context
Operational ceilings that affect whether a model fits a workload.
`context_window`: Total context supported by the model.
`max_input_tokens`: Maximum input tokens, when available.
`max_output_tokens`: Maximum generation size, when available.
`rate_limits`: Per-tier RPM, TPM, RPD, and batch queue limits.
Capabilities and controls
Nested objects that describe what the model can do and how it can be tuned.
`endpoints`: Which API surfaces are supported, such as chat or realtime.
`features`: Structured flags such as streaming or structured output.
`tools`: Tooling support such as web search or code interpreter.
`parameters`: Supported runtime controls like `temperature` and `top_p`.
`input_modalities` / `output_modalities`: Text, image, audio, video, and related modes.
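For readers who want a typed view of all of this, the following is a minimal TypeScript sketch of the detail record, derived only from the example payload and field guide above; the real schema may include more fields, and the optional markers reflect the note that some sections are not always published.

// Minimal typed sketch of the detail record, derived from the example payload
// and field guide above. The real schema may include more fields; optional
// markers reflect sections that some models do not publish.
interface ParameterSpec {
  supported: boolean;
  min?: number;
  max?: number;
  default?: number;
}

interface RateLimit {
  tier: string;
  rpm?: number;
  tpm?: number;
  rpd?: number;
  batch_limit?: number;
}

interface ModelDetail {
  id: string;
  provider: string;
  model: string; // canonical slug used in the detail route
  model_name: string; // display name, not for routing
  family?: string;
  currency?: string; // usually "USD"
  input_price?: number; // per 1M input tokens
  output_price?: number; // per 1M output tokens
  cached_input_price?: number;
  context_window?: number;
  max_input_tokens?: number;
  max_output_tokens?: number;
  input_modalities?: string[];
  output_modalities?: string[];
  parameters?: Record<string, ParameterSpec>;
  features?: Record<string, boolean>;
  rate_limits?: RateLimit[];
  updated_at?: number; // unix timestamp in the example payload
}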