Disruptive Rain
Trust & Transparency

Transparency builds trust.

User trust is essential to us. We are dedicated to being transparent about government data requests, child safety efforts, content moderation practices, and how our AI systems work.

Our Commitments

Committed to openness.

We believe transparency is essential for building and maintaining user trust.

Regular Reporting

We publish transparency reports on government data requests, child safety, and content moderation.

Privacy Protection

We carefully evaluate all requests seeking user data to protect privacy and ensure legal compliance.

User Notification

When enforcement action is taken, users are notified with details and reasons for the decision.

Appeal Process

Users can appeal moderation decisions through our support channels.

Transparency Reports

Regular reporting.

We publish regular reports to keep you informed about our practices.

Government Data Requests

Semi-Annual

H2 2025

Information about requests received from law enforcement and government agencies.

Child Safety Report

Semi-Annual

H2 2025

Details on our child safety reporting and protection measures.

Content Moderation Report

Quarterly

Q4 2025

Statistics on content moderation actions and enforcement.

AI Safety Evaluation

Quarterly

Q4 2025

Results from our safety evaluations and red-teaming exercises.

Looking for a specific report? Contact our transparency team.

transparency@disruptiverain.com

Government Data Requests

We carefully evaluate all requests from law enforcement and government agencies seeking user data. Privacy and legal compliance guide our response process.

Careful Evaluation

Every request is reviewed by our legal team to ensure it meets applicable legal standards.

Narrow Scope

We seek to narrow the scope of requests and push back on overly broad demands.

User Notification

When legally permitted, we notify affected users about government requests.

Transparency Reporting

We publish aggregate statistics on government requests received.

Content Moderation

Fair enforcement.

Our approach to content moderation balances safety with user expression, applying consistent standards across our platform.

Policy Enforcement

Clear policies define what content and behavior is not allowed on our platform.

Proactive Detection

Automated systems detect policy violations before they cause harm.

Human Review

Human reviewers make final decisions on complex moderation cases.

Consistent Standards

Moderation decisions are made consistently according to published guidelines.

Appeals Process

If you believe your content was incorrectly moderated, you can appeal the decision.

appeals@disruptiverain.com

Child Safety

Protecting children.

Child safety is a critical focus of our trust and safety efforts.

Age Verification

Users must be 18 or older, or 13+ with parental approval.

CSAM Detection

We deploy technology to detect and report child sexual abuse material.

Reporting to NCMEC

We report to the National Center for Missing & Exploited Children as required by law.

Enhanced Protections

Additional safeguards for younger users including content filtering.

Openness

Beyond reports.

Transparency extends beyond formal reports to how we operate and engage with the world.

Clear Documentation

We publish detailed documentation about our AI systems, their capabilities, and their limitations.

Safety Evaluations

We share results from safety evaluations and explain how we test our systems.

Industry Collaboration

We work with other AI companies, researchers, and policymakers to develop shared safety standards.

Community Engagement

We welcome feedback from researchers, civil society, and the public.

Questions about transparency?

We welcome inquiries from researchers, policymakers, journalists, and the public about our transparency practices.