Keep Your Infrastructure
Deploy on your terms, route with intelligence
Point to your self-hosted models, fine-tuned instances, or specialized LLMs. Any OpenAI-compatible API works: vLLM, Ollama, custom deployments.
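Because self-hosted servers like vLLM and Ollama expose the standard OpenAI chat-completions route, pointing a client at them is just a matter of the base URL. The sketch below assumes a local Ollama instance on its default port and a model name of `llama3`; both are illustrative, not AskARC-specific.

```python
import requests

def build_chat_request(base_url: str, model: str, prompt: str) -> dict:
    """Assemble an OpenAI-style chat completion request for any compatible server."""
    return {
        "url": f"{base_url}/v1/chat/completions",
        "json": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }

# Point at a self-hosted server, e.g. a local Ollama instance (default port shown).
req = build_chat_request("http://localhost:11434", "llama3", "Hello!")
# response = requests.post(req["url"], json=req["json"])  # uncomment with a live server
```

The same request shape works unchanged against vLLM or any custom deployment that implements the OpenAI API.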
Privacy-first multi-model AI routing with intelligent orchestration. Use our curated models, integrate your own custom models, or build a fully custom stack: deploy on your infrastructure and route with our intelligence. Everything you need to add advanced AI capabilities to your applications.
Works with OpenAI-compatible endpoints
Integrate seamlessly with any service using the OpenAI API standard. No custom adapters needed. If it speaks OpenAI, AskARC routes it.
Evaluate new models in production context
A/B test custom models against established options. Compare fine-tuned variants. Run new releases in real workloads before they hit our stack.
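One simple way to run such a comparison is a deterministic traffic split: hash each user ID into a model variant, then pass that variant via the documented `model` field on `/api/v1/messages`. The second model name below (`my-finetuned-model`) is a hypothetical custom model, not part of AskARC's curated list.

```python
import hashlib
import requests

API_URL = "https://api.askarc.app/api/v1/messages"

def assign_variant(user_id: str, variants: list[str]) -> str:
    """Deterministically bucket a user into one model variant (stable across requests)."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

def ask(api_key: str, user_id: str, message: str) -> dict:
    # "my-finetuned-model" is a placeholder for your own integrated model.
    model = assign_variant(user_id, ["gpt-4", "my-finetuned-model"])
    resp = requests.post(
        API_URL,
        headers={"X-API-Key": api_key},
        json={"message": message, "model": model},
    )
    return resp.json()
```

Because the bucketing is a pure function of the user ID, each user always hits the same variant, which keeps the comparison clean across sessions.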
Bring your certified models, we handle routing
Already managing HIPAA-, SOC 2-, or GDPR-compliant infrastructure? Integrate those models with AskARC. Your sensitive data stays in your environment; we handle only the routing intelligence.
https://api.askarc.app
Access the developer portal at askarc.app to manage your API keys and credits
Generate your API key from the developer portal. Keep it secure and never share it publicly
Use your API key to send messages to the FenxChat API and get AI-powered responses
Integrate AI capabilities into your application and scale as your needs grow
Check out our transparent pricing and get your API key
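To keep the key out of source control, a common pattern is to load it from an environment variable at runtime. The variable name `ASKARC_API_KEY` below is a suggestion, not a requirement of the API.

```python
import os
import requests

def auth_headers() -> dict:
    """Read the key from the environment so it never appears in source code."""
    api_key = os.environ["ASKARC_API_KEY"]  # variable name is your choice
    return {"X-API-Key": api_key, "Content-Type": "application/json"}

# Usage (with ASKARC_API_KEY exported in your shell):
# response = requests.post(
#     "https://api.askarc.app/api/v1/messages",
#     headers=auth_headers(),
#     json={"message": "Hello"},
# )
```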
For external applications making API calls
X-API-Key: fxk_your_api_key_here
Authorization: Bearer fxk_your_api_key_here
For portal management operations
Authorization: Bearer your_jwt_token
Required for API key management, credit operations, and analytics
/api/v1/auth/login
Authenticate to access the portal and manage API keys
email
string
Required
User email address
Example: "user@fenxlabs.ai"
password
string
Required
User password
Example: "SecurePassword123!"
{
"success": true,
"data": {
"token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...",
"user": {
"email": "user@fenxlabs.ai",
"credits": 1000
}
}
}
{
"success": false,
"error": "Invalid email or password"
}
curl -X POST https://api.askarc.app/api/v1/auth/login \
-H "Content-Type: application/json" \
-d '{
"email": "user@fenxlabs.ai",
"password": "SecurePassword123!"
}'
const response = await fetch('https://api.askarc.app/api/v1/auth/login', {
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify({
email: 'user@fenxlabs.ai',
password: 'SecurePassword123!'
})
});
const data = await response.json();
console.log(data);
import requests
response = requests.post(
'https://api.askarc.app/api/v1/auth/login',
json={
'email': 'user@fenxlabs.ai',
'password': 'SecurePassword123!'
}
)
data = response.json()
print(data)
/api/v1/messages
Send a message and receive an AI-generated response
message
string
Required
The message content to send to the AI
Example: "What is the capital of France?"
model
string
Optional: Specify a model (auto-selects if not provided)
Options: "gpt-4", "claude-3", "gemini-pro", "mistral-large"
temperature
number
Optional: Controls randomness (0.0 to 1.0)
Default: 0.7
{
"success": true,
"data": {
"id": "msg_abc123",
"response": "The capital of France is Paris.",
"model": "gpt-4",
"credits_used": 10,
"timestamp": "2025-10-12T10:30:00Z"
}
}
{
"success": false,
"error": "Invalid or missing API key"
}
curl -X POST https://api.askarc.app/api/v1/messages \
-H "Content-Type: application/json" \
-H "X-API-Key: fxk_your_api_key_here" \
-d '{
"message": "What is the capital of France?",
"temperature": 0.7
}'
const response = await fetch('https://api.askarc.app/api/v1/messages', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'X-API-Key': 'fxk_your_api_key_here'
},
body: JSON.stringify({
message: 'What is the capital of France?',
temperature: 0.7
})
});
const data = await response.json();
console.log(data.data.response);
import requests
response = requests.post(
'https://api.askarc.app/api/v1/messages',
headers={
'Content-Type': 'application/json',
'X-API-Key': 'fxk_your_api_key_here'
},
json={
'message': 'What is the capital of France?',
'temperature': 0.7
}
)
data = response.json()
print(data['data']['response'])
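The examples above assume a successful response; since every response carries a `success` flag, a small helper can surface the API's error message instead of failing with a `KeyError` on the missing `data` field:

```python
def extract_response(payload: dict) -> str:
    """Return the AI response text, or raise with the API's error message."""
    if not payload.get("success"):
        raise RuntimeError(payload.get("error", "unknown error"))
    return payload["data"]["response"]

# extract_response(response.json()) on a 401 raises:
# RuntimeError: Invalid or missing API key
```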
/api/v1/models
Retrieve a list of all available AI models and their capabilities
{
"success": true,
"data": {
"models": [
{
"id": "gpt-4",
"name": "GPT-4",
"provider": "OpenAI",
"description": "Most capable GPT-4 model"
},
{
"id": "claude-3",
"name": "Claude 3",
"provider": "Anthropic",
"description": "Advanced reasoning and analysis"
},
{
"id": "gemini-pro",
"name": "Gemini Pro",
"provider": "Google",
"description": "Multimodal understanding"
},
{
"id": "mistral-large",
"name": "Mistral Large",
"provider": "Mistral AI",
"description": "High-performance language model"
}
]
}
}
curl -X GET https://api.askarc.app/api/v1/models \
-H "X-API-Key: fxk_your_api_key_here"
const response = await fetch('https://api.askarc.app/api/v1/models', {
headers: {
'X-API-Key': 'fxk_your_api_key_here'
}
});
const data = await response.json();
console.log(data.data.models);
import requests
response = requests.get(
'https://api.askarc.app/api/v1/models',
headers={
'X-API-Key': 'fxk_your_api_key_here'
}
)
data = response.json()
print(data['data']['models'])
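The two endpoints compose naturally: fetch the catalog from `/api/v1/models`, pick a model, then pass its `id` to `/api/v1/messages`. The provider-based selection below is one illustrative strategy; if no match is found, the request omits `model` and falls back to AskARC's auto-selection.

```python
import requests

BASE = "https://api.askarc.app/api/v1"

def first_model_by_provider(models: list[dict], provider: str):
    """Pick a model id from the /models listing by provider name, or None."""
    for m in models:
        if m["provider"] == provider:
            return m["id"]
    return None

def ask_with_provider(api_key: str, provider: str, message: str) -> dict:
    headers = {"X-API-Key": api_key}
    models = requests.get(f"{BASE}/models", headers=headers).json()["data"]["models"]
    model_id = first_model_by_provider(models, provider)
    body = {"message": message}
    if model_id:
        body["model"] = model_id  # omit to let AskARC auto-select
    return requests.post(f"{BASE}/messages", headers=headers, json=body).json()
```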
Join our developer community on Discord for technical support, code examples, and real-time help from our engineering team. We're here to help you build amazing things.
API documentation not clear? We want your feedback: feedback@fenxlabs.ai
Start building with AskARC's multi-model AI routing today. Check out our transparent pricing or try the user-friendly chat interface.