Breikin Child Safety Policy

Last Updated: January 21, 2025

1. Introduction

At Breikin, we are committed to creating a safe and supportive environment for all users, with special attention to the safety of children and minors who may use our personal development service. This Child Safety Policy outlines our approach to protecting minors and preventing harmful or inappropriate content or interactions within our life improvement platform.

2. Age Requirements

Breikin is intended for users who are 13 years of age or older. Users between the ages of 13 and 18 must have parent or guardian consent to use the service. We implement an age verification system during the sign-up process to help enforce these requirements.

The following age restrictions apply:

  • Users under 13 years of age are not permitted to use Breikin.
  • Users between 13 and 18 years of age require parental or guardian consent.
  • Certain features may have additional age restrictions clearly indicated within the app.
  • Goal tracking and habit formation features are designed with age-appropriate content and expectations.

3. Prohibited Content and Behavior

Breikin strictly prohibits content and behavior that may harm, exploit, or endanger children. The following are explicitly prohibited on our platform:

  • Child Sexual Exploitation: Any content or behavior related to child sexual abuse, exploitation, or inappropriate interactions with minors.
  • Grooming: Attempts to establish inappropriate relationships with minors or manipulate them for any exploitative purpose.
  • Sextortion: Threatening or coercing minors for sexual content or favors.
  • Trafficking: Any attempt to traffic, trade, or exploit minors.
  • Harmful Challenges: Promoting, encouraging, or sharing challenges that may cause physical or psychological harm to minors.
  • Bullying and Harassment: Content or behavior intended to harass, intimidate, or bully others, particularly minors.
  • Hate Speech: Content promoting discrimination, hatred, or violence against any individual or group based on attributes such as race, ethnicity, gender, religion, disability, or sexual orientation.
  • Self-Harm Promotion: Content that promotes, encourages, or glorifies self-harm, suicide, or eating disorders.
  • Dangerous Goals: Goal suggestions or encouragement for activities that could be harmful to minors' physical or mental health.

4. Content Moderation and Safety Measures

To ensure a safe environment, particularly for younger users, we implement the following safety measures:

  • Content Filtering: All AI-generated content from life category avatars is filtered to prevent inappropriate or harmful material.
  • Safety Mode: A default setting that ensures all avatar interactions and motivational content are appropriate and constructive.
  • Age-Appropriate Goals: Avatar recommendations for goals and habits are tailored to be appropriate for the user's age group.
  • Positive Reinforcement: All motivational content focuses on healthy, positive development and achievement.
  • Moderation System: User-generated content and goals are subject to both automated and human moderation.
  • Reporting Tools: Easy-to-use reporting mechanisms for users to flag inappropriate content or behavior.
  • Educational Content: Clear information about healthy goal-setting and personal development practices.

5. Life Category Avatar Safety

Our AI-powered life category avatars (Health, Career, Relationships, etc.) are specifically designed with safety in mind:

  • Age-Appropriate Guidance: Avatar advice and motivational content are tailored to be appropriate for younger users.
  • Healthy Goal Suggestions: Avatars only suggest goals and habits that promote positive development and well-being.
  • Mental Health Awareness: Avatars are programmed to recognize signs of distress and provide appropriate resources.
  • Professional Disclaimers: Clear information that avatar advice is for motivational purposes and not professional medical, therapeutic, or counseling advice.
  • Crisis Prevention: Safeguards to detect and respond appropriately to concerning content or behavior.

6. Reporting Mechanisms

We encourage all users to report content or behavior that violates our Child Safety Policy. Reports can be made through:

  • In-app reporting tools accessible from any content or user interaction
  • Email to safety@breikin.com
  • Contact form on our website

All reports are taken seriously and investigated promptly. Depending on the severity of the violation, we may take appropriate action, including content removal, account suspension, or reporting to relevant authorities.

7. Compliance with Laws

Breikin complies with all applicable laws regarding child protection, including:

  • Children's Online Privacy Protection Act (COPPA)
  • Applicable state and international regulations regarding minors' online safety
  • Mandatory reporting requirements for child abuse or exploitation
  • Data protection regulations concerning minors' personal information

We cooperate fully with law enforcement in cases involving child safety and may report serious violations to appropriate authorities.

8. Education and Resources

We provide resources to help parents, guardians, and younger users understand online safety and healthy personal development:

  • In-app safety guides and educational content
  • Links to external resources on digital wellbeing and online safety
  • Clear explanations about AI-generated content and how life category avatars work
  • Guidelines for healthy goal-setting and habit formation
  • Resources for parents on supporting their children's personal development journey

9. Updates to This Policy

We may update this Child Safety Policy from time to time. We will notify users of any significant changes by posting the new policy on this page and updating the "Last Updated" date.

10. Contact Us

If you have questions or concerns about our Child Safety Policy, please contact us at:

safety@breikin.com