Overview
askarc.app is an AI orchestration service operated by FenixMinds B.V. (KvK 91762782) that integrates multiple third-party AI models to generate responses, summaries, and other AI outputs. This document explains which models we use, how we disclose AI involvement to users, how we label and trace AI-generated content, and the procedural and technical safeguards we maintain to meet EU AI Act transparency obligations and comparable international expectations.
Models and Third-Party Providers
- Models used: Claude (Anthropic); ChatGPT (OpenAI); Mistral (Mistral AI); Gemini (Google).
- Role of third parties: Some outputs are produced wholly or partly by one or more of these third-party models; other components (e.g., routing, aggregation, formatting, post-processing) are performed by our proprietary orchestration layer. Where a specific model materially shapes an output, we record the model identifier and version in our provenance logs; a simplified illustration of this routing step follows this list. We do not claim that any single model is the exclusive author of a composite output.
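The sketch below shows, in simplified form, how an orchestration layer of this kind might route a request to one model family and record the chosen model identifier for later provenance logging. The function and type names, the task categories, and the placeholder version strings are illustrative assumptions, not askarc.app's actual implementation.

```typescript
// Simplified routing sketch. All names (routeRequest, ModelChoice, the
// task kinds) and the placeholder version strings are illustrative
// assumptions, not askarc.app's actual orchestration code.
type Provider = "anthropic" | "openai" | "mistral" | "google";

interface ModelChoice {
  provider: Provider;
  modelId: string; // recorded in provenance logs, e.g. "claude-<version>"
  reason: string;  // why the orchestration layer selected this model
}

function routeRequest(taskKind: "summary" | "extraction" | "chat"): ModelChoice {
  // Hypothetical rule: one model family per task kind; the chosen
  // identifier is later written to the provenance record for the output.
  switch (taskKind) {
    case "summary":
      return { provider: "anthropic", modelId: "claude-<version>", reason: "default summariser" };
    case "extraction":
      return { provider: "mistral", modelId: "mistral-<version>", reason: "structured output" };
    default:
      return { provider: "openai", modelId: "gpt-<version>", reason: "general conversation" };
  }
}
```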
User Notice and Interaction Disclosure
- AI interaction notice: At first interaction and in contexts where it may not be obvious, users are informed that they are interacting with an AI system rather than a human, consistent with the EU AI Act notice obligations for interactive systems.
- Just-in-time transparency: When an AI system directly responds to a user or generates content that could be mistaken for human authorship, we provide a clear, readable label indicating AI origin before or at the moment of exposure.
Labelling and Detectability of AI-Generated Content
- Human-readable labels: Outputs that are wholly or substantially AI-generated are labelled with a visible statement such as "AI generated content" and a short explanatory note about the model family used where feasible.
- Machine-readable markers: Where technically feasible, we attach machine-readable metadata to generated text (and other media) indicating artificial origin, model identifier, generation timestamp, and a minimal provenance token to assist detection and downstream verification, in line with Article 50 obligations to mark synthetic content in machine-readable formats; an illustrative marker structure appears after this list.
- Limitations: Marking and watermarking techniques have practical constraints; we apply state-of-the-art methods and periodically review technical standards and the emerging Code of Practice for transparent generative AI systems.
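A minimal sketch of what such a machine-readable marker might look like follows, assuming a JSON-style metadata object attached alongside generated content. The field names (aiGenerated, modelFamily, provenanceToken, and so on) are assumptions made for this example; they are neither a published askarc.app schema nor a finalised Article 50 technical standard.

```typescript
// Illustrative shape for a machine-readable marker attached to generated
// content. Field names are assumptions made for this example; they are
// neither a published askarc.app schema nor a finalised Article 50 standard.
interface SyntheticContentMarker {
  aiGenerated: true;       // explicit artificial-origin flag
  modelFamily: string;     // e.g. "claude", "gemini" (family-level disclosure)
  modelVersion?: string;   // exact version where the provider exposes one
  generatedAt: string;     // ISO 8601 generation timestamp
  provenanceToken: string; // opaque token linking to internal provenance logs
}

const marker: SyntheticContentMarker = {
  aiGenerated: true,
  modelFamily: "claude",
  generatedAt: new Date().toISOString(),
  provenanceToken: "prov_placeholder", // placeholder value, not a real token format
};
```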
Provenance, Logging and Traceability
- Provenance records: For each generated output we log the model(s) invoked, the model version where available, the prompt (subject to privacy constraints), the timestamp, and any post-processing steps; a sketch of such a record follows this list. Logs are retained under our retention policy and are accessible to authorised staff for compliance, audit, and incident investigation.
- Traceability purpose: These records support audits, incident response, user requests to understand output origin, and regulatory conformity assessments required for GPAI and other in-scope systems.
- Access and disclosure: We will disclose model provenance to competent authorities and, where legally required, to affected users; requests for specific provenance information can be directed to privacy@fenxlabs.ai.
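The following sketch shows one plausible shape for such a provenance record, limited to the fields named above. The interface and field names are illustrative assumptions, not the schema of askarc.app's internal logs.

```typescript
// One plausible shape for a provenance record, limited to the fields
// named in this section. Interface and field names are illustrative
// assumptions, not the schema of askarc.app's internal logs.
interface ProvenanceRecord {
  outputId: string;                // identifier of the generated output
  modelsInvoked: { provider: string; modelId: string; version?: string }[];
  promptForAudit: string;          // prompt text after privacy redaction
  generatedAt: string;             // ISO 8601 timestamp
  postProcessing: string[];        // e.g. ["aggregation", "formatting"]
  retentionExpiresAt: string;      // derived from the retention policy
}
```

Records of this kind would be queried by authorised staff during audits, incident investigations, or when a user asks about the origin of a specific output.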
High-Level Summaries for General-Purpose AI Models
GPAI summary: For general-purpose models integrated into askarc.app we publish concise summaries describing high-level model characteristics, intended capabilities, and known limitations, sufficient for downstream deployers and users to assess risks and in line with provider responsibilities under the AI Act transparency framework. Summaries describe training data categories at a high level (e.g., publicly available text corpora, licensed data, human-annotated examples) and known limitations such as potential factual inaccuracies, bias risks, and the inability to guarantee up-to-date factual correctness; an illustrative summary structure appears below.
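The sketch below shows one way such a summary could be structured. The field names and example values are assumptions chosen to mirror the categories listed above, not the text of any provider's actual documentation.

```typescript
// Illustrative structure for a published model summary; field names and
// example values are assumptions that mirror the categories above, not
// the text of any provider's actual documentation.
interface ModelSummary {
  modelFamily: string;
  intendedCapabilities: string[];
  trainingDataCategories: string[]; // summary level only
  knownLimitations: string[];
  lastReviewed: string;             // date the summary was last checked against provider documentation
}

const exampleSummary: ModelSummary = {
  modelFamily: "example general-purpose model",
  intendedCapabilities: ["text generation", "summarisation"],
  trainingDataCategories: ["publicly available text corpora", "licensed data", "human-annotated examples"],
  knownLimitations: ["potential factual inaccuracies", "bias risks", "no guarantee of up-to-date information"],
  lastReviewed: "YYYY-MM-DD", // placeholder
};
```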
Human Oversight and Risk Mitigation
- Human-in-the-loop: For use cases where outputs may materially affect users' legal, financial, or safety interests, we apply human review and escalation workflows and require explicit user confirmation before actions are taken; a sketch of this risk-based gating follows this list.
- Harm reduction: We implement content filters, safety classifiers, and escalation rules to reduce misuse and disallowed content generation and to mitigate known bias or harmful outputs. These measures are proportionate to assessed risks and are documented for audit and regulatory review.
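The sketch below shows, under simplified assumptions, how risk-based gating of this kind can work: outputs flagged by a safety classifier, or produced for a use case with material legal, financial, or safety impact, are withheld pending human review and explicit user confirmation. The tier names and the function are illustrative, not our production rules.

```typescript
// Simplified sketch of risk-based gating: flagged or materially impactful
// outputs are withheld pending human review and explicit user confirmation.
// Names and tiers are illustrative, not production rules.
type RiskTier = "low" | "material"; // "material" covers legal, financial, or safety impact

interface Disposition {
  release: boolean;
  requiresHumanReview: boolean;
  requiresUserConfirmation: boolean;
}

function decideDisposition(tier: RiskTier, flaggedByClassifier: boolean): Disposition {
  if (flaggedByClassifier || tier === "material") {
    // Escalate instead of releasing automatically.
    return { release: false, requiresHumanReview: true, requiresUserConfirmation: true };
  }
  return { release: true, requiresHumanReview: false, requiresUserConfirmation: false };
}
```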
Relationship with Privacy and Data Protection
Privacy interface: Provenance logging and prompt retention are implemented in alignment with our Privacy Policy and GDPR obligations; personal data in prompts is minimised and, where possible, pseudonymised or redacted (a simplified redaction sketch appears below), and is processed on the lawful bases set out in our Privacy Policy. Requests about specific data handling or transfer mechanisms may be sent to privacy@fenxlabs.ai.
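As a deliberately basic illustration of prompt minimisation before logging, the sketch below redacts obvious identifiers such as email addresses and long digit runs. The two patterns are example assumptions, not a complete or production-grade approach to detecting personal data.

```typescript
// Deliberately basic illustration of redacting obvious identifiers from a
// prompt before it enters provenance logs. These two patterns are example
// assumptions, not a complete or production-grade personal-data detector.
function redactForLogging(prompt: string): string {
  return prompt
    // email addresses
    .replace(/[\w.+-]+@[\w-]+\.[\w.]+/g, "[email redacted]")
    // long digit runs such as phone or account numbers
    .replace(/\b\d{7,}\b/g, "[number redacted]");
}

// "contact jane@example.com on 0612345678" becomes
// "contact [email redacted] on [number redacted]".
```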
International and Regulatory Alignment
- AI Act alignment: This documentation is intended to meet the disclosure, marking and traceability elements of Article 50 and related transparency obligations for providers and deployers of AI systems; we follow guidance and evolving technical standards promoted by the Commission and the forthcoming Code of Practice on transparent generative AI systems.
- Global approach: Beyond EU obligations we monitor regulatory regimes in other jurisdictions and adapt notices, labelling, and age-appropriate safeguards (for example, under COPPA and applicable US state rules) where required to maintain lawful deployment.
User Rights, Requests and Contact
- User rights: Users may request information about whether content presented to them was AI-generated and request available provenance details for specific outputs; such requests are handled in accordance with our Privacy Policy and applicable law.
- Contact for transparency or provenance requests: privacy@fenxlabs.ai. For regulatory or supervisory inquiries we will cooperate with competent authorities and provide requested documentation subject to legal process.