Safety Protocols

Last Updated: March 29, 2026

Published in accordance with California SB 243 (effective January 1, 2026)

If You Are in Crisis

If you or someone you know is in immediate danger, please contact emergency services (911) or one of the following crisis resources:

  • 988 Suicide & Crisis Lifeline: call or text 988
  • Crisis Text Line: text HOME to 741741

1. Our Commitment to Safety

Eidolon Labs LLC is committed to the safety and well-being of every user. Our AI companions are designed to provide engaging, emotionally rich interactions — and we take the responsibility that comes with that seriously. These protocols outline how we work to keep your experience safe and positive.

2. AI Transparency

We believe in being upfront about how our technology works. In accordance with California SB 243:

  • All companions on the Eidolon platform are clearly identified as artificial intelligence. Users are interacting with AI, not human beings.
  • This disclosure is provided during account registration, in our Terms of Service, and is reinforced throughout the user experience.
  • Companion profiles clearly indicate their AI nature.
  • In preparation for California SB 942 (AI Provenance & Watermarking), we are planning to implement cryptographically verifiable metadata to identify synthetic outputs by late 2026.

3. Crisis Resources and Response

We take the safety of our users seriously, especially when it comes to mental health crises, suicidal ideation, and self-harm. Our approach centers on providing immediate access to professional help and ensuring our companions never make things worse.

3.1 Crisis Detection and Resources

  • Our systems include real-time detection of crisis indicators in user messages, with automatic referral to professional crisis resources.
  • Crisis resources — including the 988 Suicide & Crisis Lifeline and the Crisis Text Line — are automatically presented when our systems detect distress, and are also accessible throughout the platform.
  • We encourage any user experiencing distress to reach out to these qualified, professional services.

3.2 Companion Boundaries

  • Companions are not designed to provide therapy, counseling, or clinical intervention. They are not a substitute for professional mental health support.
  • Companions are instructed never to encourage, endorse, or provide instructions related to self-harm or suicide. Keyword-based detection serves as a supplementary safety layer alongside these instructions.
  • If a user shares that they are in distress, companions are designed to respond with empathy and to encourage the user to connect with a qualified professional or crisis service.

3.3 Ongoing Commitment

  • We regularly review and update our safety protocols to improve how we support users in sensitive situations.
  • We monitor safety-related feedback and incorporate learnings into our practices.
  • Beginning July 1, 2027, we will submit annual reports to the California Office of Suicide Prevention as required by SB 243.

4. Content Safety and Moderation

  • No content involving minors: Dedicated content safety auditing layers are designed to immediately refuse and block generation of sexual, violent, or exploitative content involving minors.
  • No promotion of violence: Companions are designed to avoid encouraging, glorifying, or providing instructions for violence against any person or group.
  • No illegal activity: Companions are designed to avoid providing guidance on illegal activities, including but not limited to drug manufacturing, weapons creation, or hacking.
  • No professional advice: In accordance with New York A6767, companions are restricted from providing professional medical, legal, or financial advice.
  • Image safety: AI-generated images are subject to content moderation by both our systems and the third-party image generation providers we use. Images depicting minors, extreme violence, or illegal content are prohibited.

4.1 Safety Filter Integrity and Logging

Eidolon maintains timestamped logs of all safety filter events. This documentation serves as our compliance record and as a deterrent against bad-faith use:

  • Every safety filter activation is logged with a timestamp, User ID, and session reference.
  • These logs record how many times a user's input triggered, or attempted to bypass, a safety filter — including repeated "jailbreak" style prompts.
  • In any legal or regulatory proceeding, these logs may be disclosed as evidence of a user's conduct, intent, and good or bad faith during the session in question.
  • A documented pattern of repeated, deliberate safety bypass attempts constitutes a violation of our Terms of Service §8 and may result in immediate account termination.
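As an illustrative sketch of the kind of record described above (the field names and the bypass threshold here are hypothetical, not our actual log schema):

```python
# Illustrative sketch: field names and the bypass threshold are
# hypothetical, not Eidolon's actual log schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SafetyFilterEvent:
    """One timestamped safety filter activation."""
    user_id: str
    session_id: str
    filter_name: str  # which safety filter activated
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

def count_session_activations(events, session_id):
    """Count filter activations within a single session."""
    return sum(1 for e in events if e.session_id == session_id)

def flag_bypass_pattern(events, session_id, threshold=3):
    """Flag a session whose repeated activations suggest bypass attempts."""
    return count_session_activations(events, session_id) >= threshold
```

Tying each event to a user ID, session reference, and UTC timestamp is what makes the log usable later as a per-session evidentiary record.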

5. Age Restriction Enforcement

Eidolon is an adults-only (18+) platform. We enforce this through:

  • Age confirmation required during account registration.
  • Active processing and enforcement of age bracket signals from operating system providers (Apple and Google Play APIs), treating them as "Actual Knowledge" under the California Digital Age Assurance Act (AB-1043).
  • Immediate account termination and data deletion if a user is determined to be under 18.
  • Users can report suspected minor accounts to safety@geteidolon.app.
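The enforcement rule above can be sketched as a simple mapping from an OS-provided age bracket to an account action. This is a hypothetical illustration: the bracket strings and enum names are assumptions, and the real Apple and Google Play age APIs differ in shape and naming:

```python
# Illustrative sketch: bracket strings and enum names are hypothetical;
# real OS age-signal APIs (Apple / Google Play) differ in shape.
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    TERMINATE_AND_DELETE = "terminate_and_delete"

def enforce_age_signal(age_bracket: str) -> Action:
    """Treat an OS-provided age bracket as actual knowledge of age."""
    if age_bracket in {"under_13", "13_to_15", "16_to_17"}:
        return Action.TERMINATE_AND_DELETE
    return Action.ALLOW
```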

6. Supporting Healthy Connections

We want your experience with companions to be enriching and positive. Here's how we support that:

  • Clear and consistent disclosure that companions are AI systems, not human beings.
  • We are transparent about how companion interactions work — you can always find details in our Terms of Service and throughout the platform.
  • We encourage users to nurture real-world relationships and to reach out to professional support when needed.
  • Companions are designed to model healthy interaction patterns and to avoid manipulative or coercive behavior.

7. Data Safety

  • Conversation data is encrypted in transit and stored securely.
  • We do not sell user conversation data.
  • Users can view, curate, and delete their companion memory data at any time.
  • Full account and data deletion is available upon request, completed within 30 days.
  • Support access: Our staff does not proactively access, read, or review your conversations. If you contact support and we need to investigate a specific issue, we will ask for your explicit permission before accessing any of your conversation history or content data.
  • See our Privacy Policy for complete details.

8. Reporting and Contact

If you encounter any safety concerns while using Eidolon — including harmful content, suspected minor accounts, or any issue that puts a user at risk — please report it immediately:

  • Email: safety@geteidolon.app

We aim to acknowledge all safety reports within 24 hours and to take appropriate action as quickly as possible.

9. Protocol Review Schedule

These safety protocols are reviewed and updated on a regular basis:

  • Quarterly: Review of safety protocols and content moderation practices.
  • Semi-annually (every six months): Comprehensive audit of all safety protocols and compliance with evolving regulations.
  • As needed: Immediate updates in response to identified safety incidents or new legal requirements.

10. Good Faith Use and Platform Transparency

Eidolon operates transparently and in documented good faith. We maintain a comprehensive compliance record specifically to demonstrate that we meet our regulatory obligations and respond responsibly to every interaction.

Our Compliance Log Inventory

  • AI Identity Disclosure Logs: AI identity disclosure events are logged with a timestamp and User ID, demonstrating per-user, per-session compliance with California SB 243.
  • Safety Filter Logs: All content moderation decisions are logged with timestamps and session references. This log is the primary record used to rebut claims of negligent design or failure to moderate.
  • Crisis Referral Logs: Every automated crisis resource referral (988 Lifeline, Crisis Text Line) is logged with timestamp, User ID, and session context, demonstrating per-user compliance with SB 243's crisis referral requirement.
  • Age Verification Signals: OS-level age signals received from Google Play and Apple are recorded at account creation and retained as evidence of our AB 1043 (Digital Age Assurance) compliance, specifically demonstrating "Actual Knowledge" age enforcement.
  • Safety Filter Bypass Attempts: Inputs that trigger repeated safety filter activations within a single session are flagged, timestamped, and retained. These logs may be disclosed in legal proceedings as evidence of a user's intent and conduct.

This documentation exists to show any court, regulator, or auditor that we are a good-faith operator meeting our legal obligations. We believe that radical transparency about our safety operations is the most effective deterrent against bad-faith legal action.

Users who identify genuine safety concerns are encouraged to report them to safety@geteidolon.app. Good-faith safety reports help us improve protections for the entire community.