LLM & AI API reference

OpenAI-style and Anthropic-style mock endpoints for AI integrations

Endpoints

Pick the route you want to test. This reference covers four routes; the examples below use the following selection.

POST

/api/llm/openai/v1/chat/completions

Sample request

JAVASCRIPT example

const payload = {
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Summarize the value of Fake API for Devs in one paragraph.' }
  ],
  temperature: 0.7,
  stream: false
};

fetch('https://fakeapifordevs.vercel.app/api/llm/openai/v1/chat/completions?page=1&per_page=5&delay=1', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(payload)
})
  .then(res => res.json())
  .then(console.log)
  .catch(console.error);

Response contract

List responses include pagination metadata; when you add the `delay` query parameter, the applied latency is echoed back in `meta.delayMs`.

Sample response
{
  "data": [
    {
      "id": "chatcmpl-mock001-1",
      "object": "chat.completion",
      "model": "gpt-4o"
    },
    {
      "id": "chatcmpl-mock001-2",
      "object": "chat.completion",
      "model": "gpt-4o"
    }
  ],
  "pagination": {
    "page": 1,
    "perPage": 5,
    "total": 42,
    "totalPages": 9
  },
  "meta": {
    "delayMs": 0
  }
}
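The `pagination` block above is enough to drive a page-by-page walk of a collection. A minimal sketch, assuming the documented `page`/`per_page` parameters and the response shape shown above (the helper names `pageUrl` and `fetchAll` are illustrative, not part of the API):

```javascript
const BASE = 'https://fakeapifordevs.vercel.app/api/llm/openai/v1/chat/completions';

// Build the URL for one page using the documented page/per_page controls.
function pageUrl(page, perPage) {
  const url = new URL(BASE);
  url.searchParams.set('page', String(page));
  url.searchParams.set('per_page', String(perPage));
  return url.toString();
}

// Walk every page, using pagination.totalPages from each response
// to decide when to stop, and collect all data items.
async function fetchAll(payload, perPage = 5) {
  const items = [];
  let page = 1;
  let totalPages = 1;
  while (page <= totalPages) {
    const res = await fetch(pageUrl(page, perPage), {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(payload)
    });
    const body = await res.json();
    items.push(...body.data);
    totalPages = body.pagination.totalPages;
    page += 1;
  }
  return items;
}
```

With the sample numbers above (`total: 42`, `perPage: 5`), the loop would issue nine requests.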

Query controls

These parameters are supported consistently across collection routes.

page

1-indexed page number. Defaults to 1.

per_page

Number of records per page. Defaults to 10, max 50.

q

Optional text search against names, titles, descriptions, or tags where supported.

delay

Simulate latency in seconds (max 10s).

simulate_error

Force realistic failures like 429, 500, 401, or 503 for resilience testing.
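Forcing a 429 or 503 is a convenient way to exercise a client's retry path. A minimal sketch of exponential backoff against `simulate_error` (the retry policy and `postWithRetry` helper are assumptions for illustration; only the query parameter itself comes from this reference):

```javascript
// POST with retries on transient statuses (429, 503), backing off
// exponentially: 250ms, 500ms, 1000ms, ...
async function postWithRetry(url, payload, retries = 3) {
  for (let attempt = 0; attempt <= retries; attempt++) {
    const res = await fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(payload)
    });
    if (res.ok) return res.json();
    // Give up on non-transient statuses or when retries are exhausted.
    if (![429, 503].includes(res.status) || attempt === retries) {
      throw new Error(`Request failed with status ${res.status}`);
    }
    await new Promise(resolve => setTimeout(resolve, 2 ** attempt * 250));
  }
}

// Verify the backoff path by forcing a 429:
// postWithRetry(
//   'https://fakeapifordevs.vercel.app/api/llm/openai/v1/chat/completions?simulate_error=429',
//   payload
// );
```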

pagination_style

Switch between page, cursor, and link-style pagination on supported collections.

cors

Use `cors=restrict` or `cors=preflight` to test browser integration edge cases.
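The controls above compose freely on a single URL. A minimal sketch of a query builder covering the documented parameter names (the `buildUrl` helper and the sample values are illustrative):

```javascript
// Append any of the documented query controls to a collection route.
function buildUrl(base, opts = {}) {
  const url = new URL(base);
  const controls = ['page', 'per_page', 'q', 'delay', 'simulate_error', 'pagination_style', 'cors'];
  for (const key of controls) {
    if (opts[key] !== undefined) url.searchParams.set(key, String(opts[key]));
  }
  return url.toString();
}

const url = buildUrl(
  'https://fakeapifordevs.vercel.app/api/llm/openai/v1/chat/completions',
  { page: 2, per_page: 10, delay: 3, simulate_error: 429 }
);
```

Unrecognized keys are skipped, so typos surface as missing parameters rather than malformed URLs.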

Integration checklist

  • Use `page` and `per_page` when you need predictable list states.
  • Append `delay` to mimic loading and partial-rendering moments.
  • Point internal examples and SDKs to `/api/llm`.
  • Deep-link the selected method and path when sharing review links.

Playground shortcut

Open the landing page playground with this exact route already selected.

Open playground

https://fakeapifordevs.vercel.app/api/llm/openai/v1/chat/completions?page=1&per_page=5&delay=1

Need another domain?

Explore the rest of the library without leaving the docs system.

  • Authentication
  • E-commerce
  • Real Estate
  • Social Media
  • Food Delivery
  • SaaS

Keep these docs online

Buy the maintainers a coffee

Coffee donations help us keep LLM & AI accurate, readable, and easier to test with every release.

  • Covers upkeep for the LLM & AI docs
  • Funds schema QA sprints