Safety is ourfoundation.
We believe powerful AI must be safe AI. Our commitment to safety guides every decision, from research to deployment, ensuring our technology benefits humanity.
Safety from the start.
Like any powerful technology, AI comes with real risks. We work to ensure safety is built into our systems at all levels—from research and training to deployment and monitoring.
Before Release
Rigorous testing, red-teaming, and external expert review before any system goes live.
During Deployment
Continuous monitoring, automatic safeguards, and rapid response to emerging issues.
Continuous Improvement
Learning from real-world use to strengthen safety measures over time.
Guiding principles.
Four principles guide our approach to AI safety.
Scientific Approach
We treat safety as a science, embracing uncertainty and learning from iterative deployment rather than relying on theoretical assumptions alone.
Defense in Depth
Multiple layers of safety interventions create redundancy. If one safeguard fails, others provide protection.
Methods That Scale
We prioritize safety techniques that become more effective as our models become more capable, not less.
Human Control
AI should elevate humanity and support democratic ideals. Critical decisions always involve meaningful human oversight.
Tracking advanced risks.
Our preparedness framework defines clear thresholds for capability levels and the safeguards required at each stage.
Normal operations with routine safety monitoring and standard safeguards in place.
- Continuous automated monitoring
- Regular safety evaluations
- Standard deployment procedures
Systems that could amplify existing pathways to harm require additional safeguards before deployment.
- Enhanced safety review required
- Additional safeguards implemented
- Elevated monitoring protocols
Systems that could introduce unprecedented new pathways to harm require safeguards during both development and deployment.
- Board-level oversight required
- Development safeguards mandatory
- External safety audit required
Rigorous testing.
Comprehensive evaluations ensure our models meet the highest safety standards.
Policy Compliance
Evaluating that models don't comply with requests that violate our usage policies.
Adversarial Testing
Testing resistance to jailbreaks and adversarial prompts designed to circumvent safety measures.
Factual Accuracy
Measuring hallucination rates and ensuring models acknowledge uncertainty appropriately.
Instruction Hierarchy
Verifying models properly prioritize system instructions over potentially harmful user requests.
Bias Detection
Continuous monitoring for demographic and content biases across all model outputs.
Fairness Audits
Regular third-party audits to ensure fair and equitable outcomes across user groups.
Structured oversight.
Multiple layers of governance ensure safety decisions receive proper review and accountability.
Safety Advisory Group
Cross-functional team that reviews capability reports and makes recommendations ahead of deployment.
Safety & Security Committee
Independent board-level committee overseeing critical safety and security decisions.
External Review Board
External AI safety experts provide independent oversight and recommendations.
Protecting Young Users
One critical focus of our safety efforts is protecting children and teens. We take a multi-layered approach to youth safety across all our products.
Age Requirements
Users must be 18 or older, or 13+ with parental approval, to use our AI tools.
Enhanced Protections
Multi-layered approach to teen safety including product safeguards and family support.
Content Filtering
Age-appropriate content filtering and stricter safety measures for younger users.
Learn more about our practices.
Explore our comprehensive approach to safety, security, and transparency.
Questions about AI safety?
Our safety team welcomes inquiries from researchers, policymakers, and the public. We're committed to transparency about our practices.