Safety

How we approach harmful content, abuse, and platform integrity across communities and AI-assisted features.

Draft — pending legal review

Binding policies, escalation paths, and jurisdictional carve-outs must be confirmed with counsel before this page is treated as canonical.

Community safety (draft)

Operators remain responsible for their community rules. Key AI provides tooling to moderate, report, and escalate issues consistent with our Terms and Community Guidelines.

We reserve the right to investigate credible reports of harm, fraud, or illegal activity and to take enforcement actions where required by law or policy.

AI-assisted experiences (draft)

Kai and related features are designed with consent and professional context in mind. Automated outputs should be reviewed by humans before they are used to make high-risk decisions.

We will publish model cards, evaluation summaries, and incident response commitments on the AI Transparency page after review.

Reporting (draft)

Use the in-product reporting flows where available. For concerns specific to the marketing site, route reports through Help until a dedicated intake channel is finalized.