CodexbaseAIHub Docs
CodexbaseAIHub is a unified AI API platform. One API key, one base URL, access to 9 AI engines — chat, code, NLP, vision, and more. All engines share the same authentication pattern, response envelope, and error format.
Include the X-API-Key: cxb_your_key header in every request.
Authentication
All API requests require an API key passed in the X-API-Key header. API keys are prefixed with cxb_ and are generated from your dashboard.
X-API-Key: cxb_your_api_key_here
Keys can be scoped to specific engines. An unscoped key (default) has access to all engines your plan supports.
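As a sketch of the authentication pattern (standard library only; the /text/generate path is taken from the Text Generation section below, and the key is a placeholder):

```python
import json
from urllib import request

BASE_URL = "https://your-domain.com/api/v1"  # replace with your deployment

def build_request(path: str, payload: dict, api_key: str) -> request.Request:
    """Build a POST request carrying the required X-API-Key header."""
    return request.Request(
        BASE_URL + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"X-API-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

# To actually send it:
#   with request.urlopen(build_request("/text/generate", {"prompt": "Hello"}, "cxb_...")) as resp:
#       print(json.load(resp))
```

The same header works with any HTTP client; nothing here is specific to urllib.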
Response Envelope
Every engine response wraps results in a standard envelope:
{
"status": "success", // "success" | "error"
"engine": "engine-slug", // which engine processed the request
"provider": "OpenAI", // upstream AI provider
"latency_ms": 312.5, // server-side latency in milliseconds
// ...engine-specific fields
}
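A small helper can normalize this envelope on the client: treat any body carrying an error field as a failure, and strip the shared metadata otherwise (a sketch; field names are as documented above):

```python
ENVELOPE_META = {"status", "engine", "provider", "latency_ms"}

def unwrap(body: dict) -> dict:
    """Raise on error bodies; return only the engine-specific fields otherwise."""
    if "error" in body or body.get("status") == "error":
        raise RuntimeError(f"{body.get('code', '?')}: {body.get('error', 'unknown error')}")
    return {k: v for k, v in body.items() if k not in ENVELOPE_META}
```

For example, unwrapping a sentiment response leaves just the result field.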
Base URL
All endpoints are relative to:
https://your-domain.com/api/v1
You can also explore all endpoints interactively via the Swagger UI.
Error Handling
All errors return a JSON body with error and code fields:
// 400 — Bad request / validation error
{ "error": "prompt is required", "code": 400 }
// 401 — Missing or invalid API key
{ "error": "Authentication required", "code": 401 }
// 403 — Plan does not include this engine
{ "error": "Forbidden", "code": 403 }
// 429 — Rate limit exceeded
{ "error": "Rate limit exceeded", "code": 429 }
// 500 — Engine or provider error
{ "error": "Engine error. Please try again.", "code": 500 }
Rate Limits
Rate limits are enforced per API key per engine. Limits vary by engine complexity and your plan.
| Engine Type | Default Limit | Plan Override |
|---|---|---|
| Chat / Text | 30 req / min | Up to 200/min on Enterprise |
| Code / Analysis | 20 req / min | Up to 100/min on Pro+ |
| Image Generation | 10 req / min | Up to 50/min on Pro+ |
| Medical Analyzer | 10 req / min | Requires Pro or higher |
When a rate limit is exceeded, you'll receive a 429 response. Wait and retry using exponential backoff.
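One way to implement that retry loop (a sketch: the retry count, base delay, and cap below are illustrative choices, not platform requirements):

```python
import random
import time

def backoff_delays(retries: int = 5, base: float = 1.0, cap: float = 30.0):
    """Yield exponentially growing delays with jitter: base * 2^attempt, capped."""
    for attempt in range(retries):
        yield min(cap, base * 2 ** attempt) * random.uniform(0.5, 1.5)

def call_with_retry(send, retries: int = 5, base: float = 1.0):
    """Call send() (which returns (status_code, body)); retry only on 429."""
    for delay in backoff_delays(retries, base):
        status, body = send()
        if status != 429:
            return status, body
        time.sleep(delay)
    return send()  # one final attempt after the last wait
```

Retry only 429 (and possibly 500) responses; 400/401/403 will not succeed on retry.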
Image Generation (Vision)
Generate stunning images from text prompts using DALL-E 3.
Provide a text prompt and receive a generated image URL. Supports size and quality options.
Request Parameters
| Field | Type | Required | Description | Default |
|---|---|---|---|---|
| prompt | string | Required | Image description prompt | — |
| size | string: `256x256` \| `512x512` \| `1024x1024` \| `1792x1024` \| `1024x1792` | Optional | Output dimensions | `1024x1024` |
| quality | string: `standard` \| `hd` | Optional | Render quality | `standard` |
| style | string: `vivid` \| `natural` | Optional | Visual style | `vivid` |
Code Examples
curl -X POST https://your-domain.com/api/v1/image/generate \
-H "X-API-Key: cxb_your_key_here" \
-H "Content-Type: application/json" \
-d '{"prompt": "A watercolor lighthouse at sunset", "size": "1024x1024"}'
Response
{
"status": "success",
"engine": "image-generation",
"provider": "OpenAI",
"url": "...", // Generated image URL
"revised_prompt": "...", // The prompt after the model's automatic revision
"latency_ms": 245.3
}
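Validating the documented options client-side catches 400s before the network round trip; a sketch (the option sets simply mirror the parameter table above):

```python
VALID_SIZES = {"256x256", "512x512", "1024x1024", "1792x1024", "1024x1792"}
VALID_QUALITY = {"standard", "hd"}
VALID_STYLE = {"vivid", "natural"}

def image_payload(prompt: str, size: str = "1024x1024",
                  quality: str = "standard", style: str = "vivid") -> dict:
    """Build an /image/generate body, rejecting unsupported options locally."""
    if not prompt:
        raise ValueError("prompt is required")
    if size not in VALID_SIZES:
        raise ValueError(f"size must be one of {sorted(VALID_SIZES)}")
    if quality not in VALID_QUALITY:
        raise ValueError(f"quality must be one of {sorted(VALID_QUALITY)}")
    if style not in VALID_STYLE:
        raise ValueError(f"style must be one of {sorted(VALID_STYLE)}")
    return {"prompt": prompt, "size": size, "quality": quality, "style": style}
```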
GPT Assistant (Chat)
Advanced conversational assistant with Groq LLaMA / GPT-4o fallback. Supports multi-turn chat, custom system prompts, and structured outputs.
Full-featured conversational assistant. Uses Groq LLaMA (fast, free) first, then falls back to GPT-4o. Pass message for single-turn chat, or a messages array for multi-turn.
Request Parameters
| Field | Type | Required | Description | Default |
|---|---|---|---|---|
| message | string | Required | User message | — |
| messages | array | Optional | Full conversation history for multi-turn chat | — |
| system | string | Optional | System persona | `You are a helpful assistant.` |
| max_tokens | integer | Optional | Maximum tokens in the reply | 1000 |
| temperature | number | Optional | Sampling temperature | 0.7 |
| response_format | string: `text` \| `json_object` | Optional | Output format | `text` |
Code Examples
curl -X POST https://your-domain.com/api/v1/gpt/chat \
-H "X-API-Key: cxb_your_key_here" \
-H "Content-Type: application/json" \
-d '{"message": "Explain HTTP caching in two sentences"}'
Response
{
"status": "success",
"engine": "gpt-assistant",
"provider": "Groq / OpenAI",
"reply": "...", // Assistant reply text
"model_used": "...", // Model that actually served the request
"tokens_used": 0, // Total tokens consumed
"latency_ms": 245.3
}
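For multi-turn chat, the client owns the messages array. One way to manage it (a sketch; the role/content element shape is an assumption based on the common chat-completions format, since the docs only say messages is an array):

```python
class ChatSession:
    """Accumulate conversation history for multi-turn /gpt/chat calls."""

    def __init__(self, system: str = "You are a helpful assistant."):
        self.messages = [{"role": "system", "content": system}]

    def payload(self, user_text: str, **options) -> dict:
        """Record the user turn and build the next request body."""
        self.messages.append({"role": "user", "content": user_text})
        return {"messages": self.messages, **options}

    def record_reply(self, reply: str) -> None:
        """Append the assistant's reply so the next turn has full context."""
        self.messages.append({"role": "assistant", "content": reply})
```

Call payload(...) before each request and record_reply(response["reply"]) after, so every turn sends the full history.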
Sentiment Analysis (NLP)
Analyze emotional tone. Basic = instant rule-based. Detailed = Groq AI with emotion bars.
Provide text and receive POSITIVE/NEGATIVE/NEUTRAL with confidence score.
Request Parameters
| Field | Type | Required | Description | Default |
|---|---|---|---|---|
| text | string | Required | Text to analyze | — |
| mode | string: `basic` \| `detailed` | Optional | `basic` = rule-based, `detailed` = AI | `basic` |
Code Examples
curl -X POST https://your-domain.com/api/v1/sentiment/analyze \
-H "X-API-Key: cxb_your_key_here" \
-H "Content-Type: application/json" \
-d '{"text": "I absolutely love this product!"}'
Response
{
"status": "success",
"engine": "sentiment-analysis",
"provider": "Built-in / Groq",
"result": "...", // Sentiment label with confidence score
"latency_ms": 245.3
}
Text Generation (NLP)
Generate text using Groq LLaMA first, falling back to xAI → OpenAI → DeepSeek.
Send a prompt and receive AI-generated text.
Request Parameters
| Field | Type | Required | Description | Default |
|---|---|---|---|---|
| prompt | string | Required | Text prompt | — |
| max_tokens | integer | Optional | Maximum tokens in the output | 500 |
| temperature | number | Optional | Sampling temperature | 0.7 |
| system | string | Optional | System instruction | — |
Code Examples
curl -X POST https://your-domain.com/api/v1/text/generate \
-H "X-API-Key: cxb_your_key_here" \
-H "Content-Type: application/json" \
-d '{"prompt": "Write a haiku about the sea"}'
Response
{
"status": "success",
"engine": "text-generation",
"provider": "Groq / OpenAI",
"result": "...", // Generated text
"tokens_used": 0, // Total tokens consumed
"latency_ms": 245.3
}
Code Assistant (Code)
AI-powered code generation, review, debugging, and optimization using Claude.
Submit code + an action (explain/review/generate/debug/optimize/document) and receive AI assistance.
Request Parameters
| Field | Type | Required | Description | Default |
|---|---|---|---|---|
| action | string: `explain` \| `review` \| `generate` \| `debug` \| `optimize` \| `document` | Required | What to do with the code | — |
| code | string | Optional | The code snippet | — |
| prompt | string | Optional | Additional instruction | — |
| language | string | Optional | Programming language | `python` |
Code Examples
curl -X POST https://your-domain.com/api/v1/code/assist \
-H "X-API-Key: cxb_your_key_here" \
-H "Content-Type: application/json" \
-d '{"action": "review", "code": "def add(a, b): return a + b", "language": "python"}'
Response
{
"status": "success",
"engine": "code-assistant",
"provider": "Anthropic",
"result": "...", // AI response
"action": "...", // Echoes the requested action
"language": "...", // Echoes the requested language
"latency_ms": 245.3
}
Medical Result Analyzer (Medical)
AI-powered analysis of medical lab results, blood work, and clinical notes. Explains findings in plain language, flags abnormal values, and suggests follow-up questions. For informational purposes only — not medical advice.
Submit raw lab results text or clinical notes. Select audience: 'patient' (simple language) or 'clinician' (technical). Engine flags abnormal values, explains significance, and suggests questions.
Request Parameters
| Field | Type | Required | Description | Default |
|---|---|---|---|---|
| text | string | Required | Raw lab results, blood work, or clinical notes text | — |
| audience | string: `patient` \| `clinician` | Optional | Target audience: `patient` for plain language, `clinician` for technical | `patient` |
| include_questions | boolean | Optional | Whether to include suggested follow-up questions for the doctor | `true` |
| max_tokens | integer | Optional | Maximum tokens in the response (100–2000) | 800 |
Code Examples
curl -X POST https://your-domain.com/api/v1/medical/analyze \
-H "X-API-Key: cxb_your_key_here" \
-H "Content-Type: application/json" \
-d '{"text": "HbA1c: 8.2% (High). Fasting glucose: 185 mg/dL. LDL: 142 mg/dL."}'
Response
{
"status": "success",
"engine": "medical-analyzer",
"provider": "xAI Grok / OpenAI",
"summary": "...", // Plain-language summary of findings
"abnormal": ..., // List of flagged abnormal values
"severity": "...", // Overall severity: normal/mild/moderate/high
"questions": ..., // Suggested questions for the doctor
"disclaimer": "...", // Medical disclaimer
"tokens_used": 0, // Total tokens consumed
"latency_ms": 245.3
}
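The severity field lends itself to simple client-side triage. A sketch (the ordering below just mirrors the documented normal/mild/moderate/high scale; this is routing logic for your app, not medical advice):

```python
SEVERITY_SCALE = ["normal", "mild", "moderate", "high"]

def needs_followup(analysis: dict, threshold: str = "moderate") -> bool:
    """True when the overall severity meets or exceeds the threshold."""
    severity = analysis.get("severity", "normal")
    if severity not in SEVERITY_SCALE:
        return True  # unknown severity: fail safe and flag for review
    return SEVERITY_SCALE.index(severity) >= SEVERITY_SCALE.index(threshold)
```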
Grok Chat (Chat)
Conversational AI using Groq LLaMA (fast, free) first, then xAI → OpenAI → DeepSeek.
Chat with Groq LLaMA or xAI Grok. Supports multi-turn conversations.
Request Parameters
| Field | Type | Required | Description | Default |
|---|---|---|---|---|
| message | string | Required | User message | — |
| messages | array | Optional | Conversation history | — |
| system | string | Optional | System persona | `You are a helpful AI assistant.` |
| max_tokens | integer | Optional | Maximum tokens in the reply | 1000 |
| temperature | number | Optional | Sampling temperature | 0.7 |
Code Examples
curl -X POST https://your-domain.com/api/v1/grok/chat \
-H "X-API-Key: cxb_your_key_here" \
-H "Content-Type: application/json" \
-d '{"message": "What are the benefits of exponential backoff?"}'
Response
{
"status": "success",
"engine": "grok-chat",
"provider": "Groq / xAI",
"reply": "...", // Assistant reply text
"model_used": "...", // Model that actually served the request
"tokens_used": 0, // Total tokens consumed
"latency_ms": 245.3
}
LabWise Lab Interpreter (Medical)
AI-powered medical lab result interpretation. Submit text, PDF, or image lab results — get plain-language explanations, abnormal flags, agentic self-correction (up to 3 loops), and personalized wellness recommendations. PII stripped before every AI call (NDPR/POPIA). Supports 80+ lab tests. Powered by Google Gemini (GEMINI_KEY_1/2).
Submit lab result text, PDF, or image — receive a full agentic interpretation. Supports CBC, metabolic panel, lipids, thyroid, hormones, kidney, liver, infectious disease, and 80+ other tests. 3-gate validation ensures quality. PII anonymized before AI processing. Requires GEMINI_KEY_1 and/or GEMINI_KEY_2 in Admin → Agent API Keys.
Request Parameters
| Field | Type | Required | Description | Default |
|---|---|---|---|---|
| text | string | Optional | Lab result text. Required when `mode=interpret/simplify/recommend` and no file is uploaded. | — |
| patient_id | string | Optional | Anonymized patient ID — never use real names or IDs | `ANON` |
| mode | string: `interpret` \| `simplify` \| `recommend` | Optional | `interpret` = full analysis, `simplify` = plain language, `recommend` = wellness tips | `interpret` |
| simplify_level | string: `basic` \| `child` \| `bullet` | Optional | For `mode=simplify`: `basic` = plain English, `child` = 12-year-old level, `bullet` = emoji list | `basic` |
| format | string: `full` \| `brief` | Optional | For `mode=recommend`: `full` = narrative + ranked list, `brief` = 5 tips | `full` |
Code Examples
curl -X POST https://your-domain.com/api/v1/labwise/interpret \
-H "X-API-Key: cxb_your_key_here" \
-H "Content-Type: application/json" \
-d '{"text": "HbA1c: 8.2% (High). Fasting glucose: 185 mg/dL.", "mode": "interpret"}'
Response
{
"status": "success",
"engine": "labwise",
"provider": "Google Gemini",
"interpretation": "...", // AI-generated interpretation text
"confidence": 0, // AI quality confidence 0.0–1.0
"metrics_found": 0, // Number of lab tests parsed
"refinement_loops": 0, // Agentic self-correction loops used
"was_refined": ..., // Whether the self-correction loop revised the output
"warnings": ..., // Warnings raised during parsing or validation
"processing_time_ms": 0, // Engine-side processing time
"feedback_id": "...", // Identifier for submitting feedback on this result
"latency_ms": 245.3
}
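Because simplify_level and format only apply to specific modes, a payload builder keeps requests clean; a sketch (field names as documented above; dropping irrelevant fields is a client-side choice, and the API may simply ignore them):

```python
LABWISE_MODES = {"interpret", "simplify", "recommend"}

def labwise_payload(text: str, mode: str = "interpret",
                    simplify_level: str = "basic", fmt: str = "full",
                    patient_id: str = "ANON") -> dict:
    """Build a /labwise/interpret body, sending mode-specific fields only."""
    if mode not in LABWISE_MODES:
        raise ValueError(f"mode must be one of {sorted(LABWISE_MODES)}")
    body = {"text": text, "mode": mode, "patient_id": patient_id}
    if mode == "simplify":
        body["simplify_level"] = simplify_level
    elif mode == "recommend":
        body["format"] = fmt
    return body
```

Remember to keep patient_id anonymized; PII should never reach the request in the first place.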
Text Classification (Custom ML)
Custom ML engine — classify text into your own categories. Train on your CSV, Excel, or database data, then classify instantly. No external AI provider required.
This is a **custom engine** — it runs entirely on your own training data using scikit-learn. No external AI API key is required.

### How to use
1. **Train a model** — upload a CSV/Excel file or connect a database.
2. **Classify** — send text with the `model_key` returned from training.

### Training data format
CSV/Excel files should have `product_name`, `product_description`, and `category_name` columns. Common aliases (name, title, description, category, label) are accepted.

### System model
An admin can train a system-wide model. Use `model_key: 'system'` to reference it.
Request Parameters
| Field | Type | Required | Description | Default |
|---|---|---|---|---|
| text | string | Required | Text to classify | — |
| model_key | string | Required | Cloudinary public_id of your trained model (returned by `/train`). Use `system` for the admin-trained system model. | — |
| top_n | integer | Optional | Number of top predictions to return (1–10) | 3 |
Code Examples
curl -X POST https://your-domain.com/api/v1/text-classification/classify \
-H "X-API-Key: cxb_your_key_here" \
-H "Content-Type: application/json" \
-d '{"model_key": "codexbase/tc_user_myshop", "text": "wireless noise-cancelling headphones 30hr battery"}'
Response
{
"status": "success",
"engine": "text-classification",
"provider": "Custom (scikit-learn ensemble — no API key needed)",
"results": ..., // Top-N predicted categories with scores
"model_info": ..., // Metadata about the trained model
"model_key": "...", // The model that served this request
"latency_ms": 245.3
}
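A sketch of a classify request builder that clamps top_n to the documented 1–10 range (clamping rather than raising is just one design choice; the model_key default of 'system' assumes an admin-trained system model exists):

```python
def classify_payload(text: str, model_key: str = "system", top_n: int = 3) -> dict:
    """Build a /text-classification/classify body with top_n kept in 1-10."""
    if not text:
        raise ValueError("text is required")
    return {
        "text": text,
        "model_key": model_key,           # 'system' targets the admin-trained model
        "top_n": max(1, min(10, top_n)),  # documented range is 1-10
    }
```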
SDKs & Libraries
No SDK required — CodexbaseAIHub is a standard REST API; use any HTTP client in any language. The examples above use cURL, and the same calls work from Python requests, JavaScript fetch, Go's net/http, or any other client.
For interactive exploration, use the Swagger UI which lets you try every endpoint with your API key in the browser.
Plans & Limits
| Plan | Daily Requests | Engines | API Keys |
|---|---|---|---|
| Free | 100 | Free-tier engines | 3 |
| Basic | 1,000 | Free + Basic engines | 10 |
| Pro | 10,000 | All engines | 10 |
| Enterprise | Unlimited | All engines + priority | Unlimited |