AskARC Technical FAQ

Developer questions about BYOMS, multi-model AI routing, API integration, compliance architecture, and pricing. Find answers to build with confidence.

Getting Started

Quick Answer: AskARC is a privacy-first multi-model AI platform that intelligently routes your queries to the best AI model (ChatGPT, Claude, Gemini, Mistral, and more) without you needing to choose.

How it works: Instead of managing multiple AI subscriptions or guessing which model to use, AskARC analyzes your question and routes it to the most suitable model automatically. Built in Europe with GDPR compliance and EU AI Act standards at its core.

Bottom Line: One interface, multiple expert AI models, intelligent routing, privacy-first architecture.

Quick Answer: AskARC gives you access to multiple AI models (ChatGPT, Claude, Gemini, Mistral) in one interface with intelligent routing, whereas ChatGPT and Claude are single-model platforms.

Key Differences:

Multi-Model Access: Get the best answer from whichever model is strongest for your question
Privacy-First: No single model sees your full conversation history
BYOMS Support: Integrate your own self-hosted or fine-tuned models
EU Standards: Built for GDPR compliance and EU AI Act alignment

Bottom Line: Choose AskARC for flexibility, privacy, and access to multiple AI experts in one place.

Quick Answer: No technical knowledge needed for the chat interface. For API integration, basic programming knowledge is required.

Two Ways to Use AskARC:

Chat Interface: No technical skills needed. Just ask questions like you would with ChatGPT.
API Integration: Requires programming knowledge (JavaScript, Python, etc.) to integrate into your applications.

Bottom Line: Anyone can use the chat interface. Developers can use the API for custom integrations.

Quick Answer: Sign up at askarc.app, navigate to API settings, generate an API key, and make your first request using our OpenAI-compatible endpoint.

4-Step Process:

1. Sign Up: Create an account at askarc.app
2. Generate API Key: Go to Settings → API Keys → Create New Key
3. Make First Request: Use endpoint https://api.askarc.app/v1/chat/completions
4. Build & Scale: Integrate into your application

Bottom Line: Getting started takes less than 5 minutes. Full API documentation available at askarc.app/api.html
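The 4-step process above can be sketched in Python. This is a minimal example: the endpoint comes from the docs above, while the `fxk_` key is a placeholder you'd replace with your own, and the exact response shape should be confirmed against askarc.app/api.html.

```python
import json
import urllib.request

API_URL = "https://api.askarc.app/v1/chat/completions"

def build_first_request(api_key: str, question: str) -> urllib.request.Request:
    """Build an OpenAI-compatible chat completion request for AskARC."""
    body = json.dumps({
        "messages": [{"role": "user", "content": question}]
    }).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "X-API-Key": api_key,  # key generated in Settings -> API Keys
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To actually send it (requires a real fxk_ key):
# with urllib.request.urlopen(build_first_request("fxk_your_key", "Hello!")) as resp:
#     print(json.load(resp))
```

Only the standard library is used here, which also illustrates the point below: any language that can make an HTTP POST request can integrate, no SDK required.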

Quick Answer: Free tier with 100 calls/month. Premium tier at €9.99/month for 1,000 calls. Volume API pricing starts at €0.0015/call.

Pricing Tiers:

Free: 100 API calls/month (€0)
Premium: 1,000 calls/month for €9.99 (best value)
API Pay-As-You-Go: €0.0015/call for 0-100K calls (volume discounts available)

Bottom Line: Simple, transparent pricing with generous free tier for testing. One API call = one request, no hidden multipliers.

Quick Answer: Yes! Free tier includes 100 API calls per month, perfect for testing and small projects.

What's Included:

100 API calls/month: Enough for thorough testing
Access to all models: ChatGPT, Claude, Gemini, Mistral
Intelligent routing: Same routing logic as paid tiers
No credit card required: Start testing immediately

Bottom Line: Test AskARC risk-free with our generous free tier before committing to a paid plan.

Quick Answer: Multi-model routing, privacy-first architecture, unfiltered access, and EU values-driven governance make AskARC unique.

Key Differentiators:

✅ Multi-model routing: Smarter answers, no manual switching between AI tools
✅ Privacy-first architecture: No profiling, no tracking, GDPR compliant
✅ Unfiltered access: Ask what matters, get real answers without excessive filtering
✅ Lightweight by design: Fast, efficient, scalable infrastructure
✅ Built for trust: EU values, transparent governance, AI Act aligned

Bottom Line: AskARC combines the best of multiple AI models with European privacy standards and transparent governance.

Quick Answer: Yes — AskARC does exactly that. It dynamically connects to leading AI models like ChatGPT, Claude, Gemini, and Mistral.

How It Works: AskARC routes each query to the one model best suited to respond. You get the benefit of multiple expert engines in one seamless experience, without needing to guess which AI tool to use or juggle multiple subscriptions.

Bottom Line: One interface, multiple AI models working together intelligently. That's AskARC.

Quick Answer: AskARC does. It evaluates your prompt in real-time and selects the most capable model based on context, complexity, and topic.

How Selection Works: AskARC analyzes your question's characteristics (technical depth, creative requirements, language, complexity) and routes to the AI model with the strongest performance profile for that specific type of query. No manual switching, no guesswork.

Bottom Line: Intelligent, automatic model selection so you always get the best answer.

Quick Answer: AskARC is built for that. It's privacy-first, model-agnostic, and designed to deliver fast, accurate answers by leveraging multiple AI systems behind the scenes.

Why AskARC: Unlike single-model platforms (ChatGPT, Claude, Gemini), AskARC gives you access to all of them through one interface with intelligent routing. You get the strengths of each model without managing multiple subscriptions or switching between apps.

Bottom Line: If you want the best of all AI models in one privacy-focused platform, AskARC is built for you.

Quick Answer: Use AskARC. It routes your query instantly, returns a consolidated answer, and avoids the delays of switching apps or rephrasing for different models.

Speed Benefits: Instead of opening ChatGPT, then Claude, then Gemini to compare answers, AskARC selects the best model in under 50ms and gives you one high-quality response. Faster routing, better answers, no context-switching overhead.

Bottom Line: One question, one instant answer from the best-suited model. That's the fastest way.

Quick Answer: AskARC handles document summarization beautifully by routing to the model best equipped for your specific needs.

How It Works: Paste your document, and AskARC selects the optimal model based on document length, complexity, and your requirements — whether you need a quick overview, detailed breakdown, technical analysis, or multilingual adaptation.

Supported Formats: Plain text, code, technical papers, reports, articles, and more. AskARC routes to Claude for long-form analysis, GPT for structured summaries, or Gemini for multimodal content.

Bottom Line: Smart model selection means better summaries tailored to your document type.

API & Integration

Quick Answer: Any language that can make HTTP requests works. Popular choices include JavaScript, Python, PHP, Ruby, Go, Java, and C#.

Why it works: AskARC uses a standard REST API with OpenAI-compatible endpoints. Any language with HTTP client libraries can integrate seamlessly.

Popular Libraries:

JavaScript/Node.js: axios, node-fetch, native fetch
Python: requests, httpx, aiohttp
PHP: Guzzle, cURL
Ruby: HTTParty, Faraday

Bottom Line: If your language can make HTTP POST requests, it can use AskARC.

Quick Answer: Use your API key in the request header as X-API-Key: fxk_your_key or Authorization: Bearer fxk_your_key.

Two Authentication Methods:

Recommended: X-API-Key: fxk_your_api_key_here
Alternative: Authorization: Bearer fxk_your_api_key_here

Important: API keys start with fxk_ and should be kept secret. Never commit them to version control or share them publicly.

Bottom Line: Simple header-based authentication. One key covers all requests within your tier limits.
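Both header styles from above can be produced with a small helper. The environment variable name `ASKARC_API_KEY` is just a suggested convention for keeping the key out of source control:

```python
import os

def auth_headers(api_key: str, use_bearer: bool = False) -> dict:
    """Return the authentication header in either supported style."""
    if use_bearer:
        return {"Authorization": f"Bearer {api_key}"}
    return {"X-API-Key": api_key}  # recommended style

# Load the key from the environment rather than hardcoding it:
# api_key = os.environ["ASKARC_API_KEY"]  # never commit fxk_ keys
```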

Quick Answer: POST requests to /v1/chat/completions with OpenAI-compatible JSON format.

Basic Request Structure:

Endpoint: https://api.askarc.app/v1/chat/completions
Method: POST
Headers: X-API-Key, Content-Type: application/json
Body: JSON with messages array and optional parameters

Bottom Line: If you've used OpenAI's API, you already know how to use AskARC. Same format, intelligent routing.
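A sketch of the request body described above. Only the `messages` array is documented here; the optional `system` role and `stream` flag follow the OpenAI convention and are assumptions to confirm against the API docs:

```python
import json

def chat_payload(user_message: str, system_prompt: str = "",
                 stream: bool = False) -> str:
    """Serialize an OpenAI-compatible chat completion body."""
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_message})
    body = {"messages": messages}
    if stream:
        body["stream"] = True  # OpenAI-style streaming flag (assumed)
    return json.dumps(body)
```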

Quick Answer: Check HTTP status codes. 200 = success, 401 = bad auth, 429 = rate limit, 500 = server error. Implement exponential backoff for retries.

Common Error Codes:

200 OK: Request successful
400 Bad Request: Invalid request format
401 Unauthorized: Invalid or missing API key
429 Too Many Requests: Rate limit exceeded
500 Server Error: Temporary server issue

Best Practice: Implement retry logic with exponential backoff (1s, 2s, 4s delays) for 429 and 500 errors.

Bottom Line: Standard REST error handling. Monitor status codes and implement graceful retries.
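The retry logic above (1s, 2s, 4s backoff on 429 and 500) can be sketched like this. `send` and its `(status, body)` return shape are placeholders for whatever HTTP client you use:

```python
import time

RETRYABLE = {429, 500}  # rate limit and temporary server error

def request_with_backoff(send, max_retries: int = 3, sleep=time.sleep):
    """Call send() and retry retryable errors with exponential backoff.

    send: zero-argument callable returning (status_code, body).
    sleep is injectable so the delays can be tested without waiting.
    """
    for attempt in range(max_retries + 1):
        status, body = send()
        if status not in RETRYABLE:
            return status, body
        if attempt < max_retries:
            sleep(2 ** attempt)  # 1s, 2s, 4s, ...
    return status, body
```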

Quick Answer: Yes! AskARC works with AWS Lambda, Vercel Functions, Netlify Functions, Cloudflare Workers, and other serverless platforms.

Why it works: The AskARC API uses simple HTTP requests with no long-lived connections or special requirements. Deploy anywhere that supports an HTTP client.

Popular Platforms:

AWS Lambda: Use with Node.js, Python, or any runtime
Vercel/Next.js: API routes and edge functions
Netlify: Netlify Functions (Node.js)
Cloudflare Workers: Edge runtime compatible

Bottom Line: Serverless-friendly API. No special configuration needed.
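As one example of the pattern, here is an AWS-Lambda-style handler sketch. The event shape follows API Gateway's proxy integration, the `question` field is a made-up input contract, and the outbound call to AskARC is left as a comment:

```python
import json

def handler(event, context=None):
    """Parse an incoming request and build the payload to forward to AskARC."""
    try:
        question = json.loads(event.get("body") or "{}")["question"]
    except (KeyError, json.JSONDecodeError):
        return {"statusCode": 400,
                "body": json.dumps({"error": "missing question"})}
    payload = {"messages": [{"role": "user", "content": question}]}
    # Here you'd POST `payload` to https://api.askarc.app/v1/chat/completions
    # with your fxk_ key, then return the model's answer to the caller.
    return {"statusCode": 200, "body": json.dumps(payload)}
```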

Quick Answer: Routing decision: <50ms. Full response: 1-5 seconds depending on query complexity and model selected.

Performance Breakdown:

Routing Intelligence: Under 50ms to select best model
Simple Queries: 1-2 seconds for straightforward questions
Complex Queries: 3-5 seconds for detailed analysis
Streaming: First tokens in <1 second (when enabled)

Bottom Line: Fast routing, competitive response times. Use streaming for immediate user feedback.
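When streaming is enabled, responses typically arrive as server-sent events. This parser assumes the OpenAI-compatible streaming format (`data: {...}` lines ending with `data: [DONE]`); confirm the exact wire format against AskARC's API docs:

```python
import json

def extract_stream_text(sse_lines):
    """Join the content deltas from OpenAI-style SSE lines into one string."""
    parts = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip blank lines and comments
        chunk = line[len("data: "):]
        if chunk.strip() == "[DONE]":
            break
        delta = json.loads(chunk)["choices"][0]["delta"]
        parts.append(delta.get("content", ""))
    return "".join(parts)
```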

BYOMS & Custom Models

Quick Answer: BYOMS lets you integrate your own self-hosted, fine-tuned, or custom AI models with AskARC's intelligent routing, while keeping your data and models under your control.

How it works: Point AskARC to your OpenAI-compatible endpoint (vLLM, Ollama, custom deployment). AskARC handles routing metadata only—your sensitive data never leaves your infrastructure.

Key Benefits:

Your Infrastructure: Models run in your VPC or on-premise
Your Data: Never leaves your network
Your Models: Fine-tuned, specialized, or custom
Our Routing: Intelligent model selection

Bottom Line: Combine AskARC's routing intelligence with your own models and complete data control.

Quick Answer: Any OpenAI-compatible endpoint works: vLLM, Ollama, Text Generation Inference, FastChat, or custom deployments.

Compatible Platforms:

vLLM: High-performance LLM serving (most popular)
Ollama: Local model deployment
Text Generation Inference: Hugging Face's TGI
FastChat: OpenAI-compatible server
Custom: Any endpoint with /v1/chat/completions

Bottom Line: If it speaks OpenAI API format, AskARC can route to it. No custom adapters needed.
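Before registering a BYOMS endpoint, you can smoke-test that it speaks the OpenAI format. The base URL and model name below are examples for a local vLLM or Ollama deployment; substitute your own:

```python
import json
import urllib.request

def byoms_probe(base_url: str, model: str = "my-finetune") -> urllib.request.Request:
    """Build a minimal chat-completion probe for your own endpoint."""
    url = base_url.rstrip("/") + "/v1/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": "ping"}],
    }).encode("utf-8")
    return urllib.request.Request(
        url, data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# e.g. against a local vLLM server:
# with urllib.request.urlopen(byoms_probe("http://localhost:8000")) as resp:
#     print(json.load(resp))
```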

Quick Answer: Contact us with your endpoint URL and model details. We configure the routing, you keep full control of your infrastructure.

Integration Steps:

1. Deploy Your Model: Use vLLM, Ollama, or custom setup
2. Expose Endpoint: Make accessible to AskARC (VPN/VPC peering if needed)
3. Contact Support: Share endpoint URL and model info
4. Test & Go Live: We configure routing, you test

Bottom Line: Simple integration process. Join our Discord or email support@askarc.app to get started.

Quick Answer: Yes! BYOMS is perfect for fine-tuned models. Integrate your domain-specific or specialized models while benefiting from intelligent routing.

Common Use Cases:

Domain-Specific: Medical, legal, financial fine-tuning
Company Knowledge: Models trained on internal data
Language-Specific: Multilingual or dialect specializations
Task-Specific: Code generation, summarization, etc.

Bottom Line: Fine-tuned models are a perfect fit for BYOMS. Your specialized knowledge stays private.

Quick Answer: AskARC's router analyzes your query metadata (complexity, topic, length) and routes to your custom model when it's the best fit, or to our curated models otherwise.

Routing Intelligence:

Metadata Analysis: Query topic, length, complexity
Model Matching: Routes to your custom model for relevant queries
Fallback: Uses curated models for out-of-scope queries
No Data Exposure: Only routing metadata analyzed, not content

Bottom Line: Intelligent routing to your models when appropriate, with privacy-preserving metadata-only analysis.

Quick Answer: BYOMS: you pay infrastructure costs (GPUs, hosting) + AskARC routing fee. Managed: you pay per-call pricing only. BYOMS becomes cheaper at high volume.

Cost Breakdown:

Managed Models: €0.0015/call, simple and predictable
BYOMS: Your infra costs (fixed) + routing fee (per-call)
Breakeven: Typically 100K+ calls/month
High Volume: BYOMS significantly cheaper

Bottom Line: Start with managed for simplicity. Switch to BYOMS for volume savings or compliance requirements.
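The breakeven point above can be estimated with a line of arithmetic. The BYOMS routing fee is not published here, so treat it as an assumed input:

```python
def byoms_breakeven(monthly_infra_eur: float, routing_fee_eur: float,
                    managed_rate_eur: float = 0.0015) -> float:
    """Calls/month above which BYOMS becomes cheaper than managed pricing.

    managed cost = calls * managed_rate
    byoms cost   = monthly_infra + calls * routing_fee
    """
    if routing_fee_eur >= managed_rate_eur:
        raise ValueError("BYOMS never breaks even if routing fee >= managed rate")
    return monthly_infra_eur / (managed_rate_eur - routing_fee_eur)
```

For example, with an assumed 500 EUR/month of GPU hosting and a 0.0005 EUR/call routing fee, breakeven lands at roughly 500,000 calls/month; cheaper infrastructure or a lower fee pulls that figure down toward the "100K+" range cited above.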

Quick Answer: Yes! Use BYOMS to A/B test new models against your production stack before switching. Run parallel tests without risk.

Testing Workflow:

1. Deploy Test Model: New version in your infrastructure
2. Add to AskARC: Configure as BYOMS endpoint
3. A/B Test: Route percentage of traffic to new model
4. Compare Results: Evaluate before full rollout

Bottom Line: Test new models in real production context without disrupting your stack.
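For step 3, a deterministic hash-based split is one common way to send a stable percentage of users to the test model. This is a client-side sketch; AskARC's own routing configuration may implement the split differently:

```python
import hashlib

def ab_bucket(user_id: str, test_fraction: float = 0.1) -> str:
    """Deterministically assign a user to 'test' or 'control'."""
    digest = hashlib.sha256(user_id.encode("utf-8")).digest()
    # First two bytes give a stable value in [0, 1)
    value = int.from_bytes(digest[:2], "big") / 65536
    return "test" if value < test_fraction else "control"
```

Because the assignment depends only on the user ID, the same user always lands in the same bucket, which keeps A/B comparisons consistent across sessions.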

Quick Answer: Yes! Integrate multiple custom models. AskARC routes to the most appropriate one based on query characteristics.

Multi-Model Scenarios:

Specialized Models: Code model, writing model, analysis model
Size Variants: 7B for simple, 70B for complex queries
Regional Models: Different models for different languages
Compliance: HIPAA model, general model

Bottom Line: Build a custom model stack optimized for your exact use case. Full flexibility.

Compliance & Security

Quick Answer: No, AskARC is not HIPAA certified. However, with BYOMS, you can integrate your HIPAA-compliant infrastructure while using our routing intelligence.

How It Works: AskARC's router is pass-through only: it handles routing metadata (model selection, request size) but never sees your data content. With BYOMS, you bring your HIPAA-certified models in your VPC. Your sensitive data never leaves your environment.

Architecture:

Your Infrastructure: HIPAA-certified models you control
AskARC Router: Metadata analysis only (no PHI/PII)
Your Data: Stays in your network
Your Certifications: You maintain BAAs and compliance

Bottom Line: Use BYOMS to keep sensitive data in your compliant infrastructure while benefiting from intelligent routing.

Quick Answer: With BYOMS, yes. Your sensitive data stays in your compliant infrastructure. Without BYOMS, data is processed by third-party model providers (OpenAI, Anthropic, etc.).

Two Scenarios:

With BYOMS: Your data never leaves your network. You maintain full control and compliance.
Without BYOMS: Data is sent to model providers per their terms. Review their compliance certifications.

Best Practice: For PHI, PII, or regulated data, use BYOMS with your certified infrastructure.

Bottom Line: BYOMS is designed specifically for organizations with sensitive data requirements.

Quick Answer: BYOMS keeps your data in your certified environment. AskARC only sees routing metadata, never your actual data. You maintain your certifications, we provide routing.

Data Flow with BYOMS:

1. Request Arrives: AskARC analyzes routing metadata only
2. Routing Decision: Based on query characteristics (topic, complexity)
3. Your Model Processes: Data stays in your VPC/network
4. Response Returns: Through your infrastructure

What AskARC Sees: Query length, topic category, model preference. What AskARC Never Sees: Your actual data content, PHI, PII, sensitive information.

Bottom Line: You own compliance, we own routing. Clear separation of concerns.

Quick Answer: With BYOMS: Only routing metadata (no content). Without BYOMS: Requests are passed to model providers per their privacy policies.

With BYOMS (Metadata Only):

We See: Query length, topic category, routing preferences
We Don't See: Your actual question content, responses, or sensitive data
Storage: Routing logs only (no content)

Without BYOMS: Requests go to model providers (OpenAI, Anthropic, Google). Review their privacy policies for data handling details.

Bottom Line: BYOMS = metadata-only architecture. Standard API = model provider terms apply.

Quick Answer: AskARC is built and operated in the European Union (Netherlands) with GDPR compliance and EU AI Act alignment.

Infrastructure:

Primary Region: EU (Netherlands)
Data Governance: GDPR compliant
Standards: EU AI Act aligned
With BYOMS: Your data stays in your chosen region

Important: Model providers (OpenAI, Anthropic, etc.) may process data in different regions. Check their policies for details.

Bottom Line: EU-based routing infrastructure. BYOMS keeps your data wherever you deploy your models.

Quick Answer: No. AskARC never trains on your data. We don't store your query content, and our routing models are trained on metadata patterns only.

Data Usage Policy:

Never Used for Training: Your questions, responses, or content
Routing Improvement: Anonymized metadata patterns only
Model Providers: Check their policies (most offer opt-out)
With BYOMS: Your data never reaches AskARC or model providers

Bottom Line: Your content is yours. We route, we don't train on it.

Quick Answer: AskARC is built in Europe with GDPR compliance at its core. No single model sees your full conversation, and we minimize personal data retention.

Privacy Protections:

GDPR Compliant: Built and operated in the EU (Netherlands) with European privacy standards
Conversation Privacy: No single AI model sees your full conversation history
Minimal Data Retention: Only minimal logs for audit and improvement
No Tracking: No profiling, no behavioral tracking, no selling your data
With BYOMS: Your data never leaves your infrastructure

Bottom Line: Privacy-first by design. Your data stays yours, protected by EU standards.

Quick Answer: Yes. AskARC is designed to meet EU AI Act standards and GDPR compliance. We apply filtering, human oversight, and transparent labeling to reduce risk.

Compliance Standards:

EU AI Act Aligned: Built to meet EU AI Act requirements for transparency and governance
GDPR Compliant: European privacy standards for data handling
Risk Mitigation: Content filters, human oversight, and transparent AI labeling
NOT HIPAA/SOC2 Certified: AskARC is not HIPAA or SOC2 certified. For regulated data, use BYOMS with your certified infrastructure

Important: If you need HIPAA, SOC2, or other compliance certifications, use BYOMS to keep sensitive data in your certified environment while benefiting from AskARC's routing.

Bottom Line: EU compliant for general use. For regulated industries, BYOMS lets you maintain your certifications while using our routing intelligence.

Quick Answer: AskARC is privacy-first by design. No single model sees your full conversation, no tracking, no profiling, and all data handling aligns with GDPR and EU AI Act standards.

Privacy Features:

Distributed Conversations: Your conversation is split across models — no single AI sees your complete history
No Behavioral Tracking: We don't build user profiles or track your behavior
EU Data Protection: GDPR compliance with servers in the Netherlands
Transparent Policies: Clear privacy policy, no hidden data collection
BYOMS Option: Keep all data in your infrastructure for maximum control

Bottom Line: If privacy matters to you, AskARC is built with European privacy standards from day one.

Pricing & Credits

Quick Answer: Simple tier-based pricing: Free (100 calls/month), Premium (€9.99 for 1,000 calls/month), or pay-as-you-go API pricing starting at €0.0015/call.

Pricing Tiers:

Free Tier: 100 API calls/month, €0, no credit card required
Premium Tier: 1,000 calls/month for €9.99 (exceptional value)
API Pay-As-You-Go: €0.0015/call (0-100K calls), volume discounts available
Enterprise/High Volume: Custom pricing for 25M+ calls/month

Volume Discounts (API): 100K-5M calls: €0.0010/call • 5M-25M calls: €0.0008/call

Bottom Line: Start free, scale with Premium, go API pricing for high volume. Transparent, no hidden fees.
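The volume tiers above translate into a simple cost estimate. One assumption to confirm with billing: this sketch applies a single flat rate to the whole month's volume, rather than marginal per-tier rates:

```python
# Published pay-as-you-go rates by monthly volume (EUR per call)
TIERS = [
    (100_000, 0.0015),     # 0 - 100K calls
    (5_000_000, 0.0010),   # 100K - 5M calls
    (25_000_000, 0.0008),  # 5M - 25M calls
]

def monthly_api_cost(calls: int) -> float:
    """Estimated monthly EUR cost for pay-as-you-go API usage."""
    for ceiling, rate in TIERS:
        if calls <= ceiling:
            return calls * rate
    raise ValueError("25M+ calls/month: contact AskARC for enterprise pricing")
```

For instance, 50,000 calls in a month comes to roughly 75 EUR, and 1M calls to roughly 1,000 EUR at the discounted rate.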

Quick Answer: 1 API call = 1 request. Straightforward 1:1 counting regardless of request or response size.

Simple Counting: One API request equals one call, whether it's a simple question or a complex multi-turn conversation. No token counting, no size multipliers, no hidden math.

Examples:

"What is AI?" = 1 call
10,000 word essay request = 1 call
Complex code generation = 1 call
10 separate requests = 10 calls

Bottom Line: Transparent call counting. One request = one call, every time.

Quick Answer: Subscription tiers (Free, Premium) reset monthly. Pay-as-you-go API credits don't expire.

Expiration Policy:

Free Tier: 100 calls reset on the 1st of each month (unused calls don't roll over)
Premium Tier: 1,000 calls reset monthly (unused calls don't roll over)
API Pay-As-You-Go: No expiration, use at your own pace
Overage: Premium users can add API pay-as-you-go for overages

Bottom Line: Subscription calls reset monthly. Pay-as-you-go credits never expire.

Quick Answer: Free tier: 100 calls/month for testing. Premium tier: 1,000 calls/month for €9.99 with same features and access.

Tier Comparison:

Free Tier:
• 100 API calls/month
• Access to all models (ChatGPT, Claude, Gemini, Mistral)
• Intelligent routing
• Full API access
• Perfect for testing and small projects

Premium Tier (€9.99/month):
• 1,000 API calls/month (10x more)
• Same model access
• Same routing intelligence
• Same API features
• Exceptional value for production use

Bottom Line: Free tier for testing, Premium for production. Same quality, just more volume.

Quick Answer: Yes! You can upgrade to Premium at any time or downgrade at the end of your billing cycle.

Upgrading: Upgrade to Premium instantly to unlock 1,000 calls/month and access all AI models. Your new tier takes effect immediately.

Downgrading: If you downgrade, you'll keep Premium features until the end of your current billing cycle, then switch to Free tier (100 calls/month).

Bottom Line: Flexible tier switching with no penalties. Upgrade anytime, downgrade at billing cycle end.

Quick Answer: All payments are processed securely through Stripe. We accept major credit cards (Visa, Mastercard, American Express). iOS app subscriptions are processed via the Apple App Store.

Payment Methods:

Web & API: Credit cards processed securely through Stripe
iOS App: Payments via Apple App Store for iOS subscriptions
High Volume API: Invoice payment available (contact us for eligibility)

Security: All payment data is handled by Stripe, a PCI-DSS Level 1 certified payment processor. We never store your credit card information.

Bottom Line: Secure payments via Stripe for web, Apple for iOS, and invoice options for high-volume API usage.