Safety at Every Step
Building AI that is safe, aligned, and beneficial through continuous learning and real-world deployment
Safety as a Science
At TetiAI, we treat safety not as a destination but as an ongoing scientific process. Our name comes from the protective goddess Thetis: just as she adapted to protect her son Achilles, we believe AI safety must evolve with each advance in capabilities.
We embrace uncertainty, learning from each deployment rather than relying solely on theoretical principles. This empirical approach helps us understand real-world impacts and continuously improve our safety measures.
Our Three-Step Safety Process
Teach
We start by teaching Teti right from wrong through constitutional AI methods, filtering harmful content, and training it to respond with empathy and wisdom.
Test
We conduct rigorous internal evaluations and collaborate with external experts to test real-world scenarios, continuously enhancing our safeguards.
Learn
We use real-world feedback and deployment data to make Teti safer and more helpful, treating each interaction as a learning opportunity.
Core Safety Principles
Iterative Deployment
We deploy Teti's capabilities incrementally, allowing society to adapt while we learn from real-world usage. Each version builds on lessons from the previous one.
Why it matters: AGI development is continuous, not a single leap. By deploying iteratively, we ensure safety lessons come from actual experience rather than speculation.
Defense in Depth
No single intervention is sufficient. We layer multiple defenses, from training to monitoring to human oversight, so that every layer would have to fail before a safety incident could occur.
- Training-time safety measures and value alignment
- Real-time monitoring and anomaly detection
- Human review and override capabilities
- External red teaming and security audits
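The layered approach above can be sketched as a chain of independent checks, where an output is released only if every layer approves. This is an illustrative sketch, not TetiAI's actual architecture; the layer functions and their logic are placeholders:

```python
# Hypothetical defense-in-depth pipeline. Each layer is an independent
# check; a single failing layer blocks release. All names and checks
# here are stand-ins, not a real production system.

def training_time_filter(text: str) -> bool:
    # Stands in for value alignment baked in during training.
    return "harmful" not in text

def runtime_monitor(text: str) -> bool:
    # Stands in for real-time monitoring and anomaly detection.
    return len(text) < 10_000

def human_review(text: str, flagged: bool) -> bool:
    # Stands in for human oversight; flagged outputs require review.
    return not flagged

def release(text: str) -> bool:
    """Every layer must approve; any single failure blocks the output."""
    return (training_time_filter(text)
            and runtime_monitor(text)
            and human_review(text, flagged=False))
```

The design point is that the layers are conjunctive: weakening one check does not by itself open a path to a safety incident.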
Methods That Scale
We develop alignment techniques that become more effective as Teti becomes more capable. Our safety measures must scale with intelligence, not break under it.
Example: Teti can critique its own outputs, helping identify flaws that humans might miss. As it becomes smarter, its ability to ensure its own safety improves.
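The self-critique pattern mentioned above can be sketched as a simple draft-critique-revise loop. This is a minimal illustration, assuming a generic `query_model` text-generation call (a placeholder, not a TetiAI API):

```python
# Minimal self-critique loop (hypothetical). `query_model` stands in for
# any language-model call; here it just echoes its prompt so the sketch
# runs standalone.

def query_model(prompt: str) -> str:
    # Placeholder: a real system would call a language model here.
    return f"[model response to: {prompt!r}]"

def answer_with_self_critique(question: str) -> str:
    """Draft an answer, critique the draft, then revise it."""
    draft = query_model(question)
    critique = query_model(f"List flaws or safety issues in this answer: {draft}")
    revised = query_model(
        f"Question: {question}\nDraft: {draft}\nCritique: {critique}\n"
        "Rewrite the draft, fixing the issues identified."
    )
    return revised
```

Because the critique step uses the same model as the draft step, the quality of the oversight grows with the capability of the model, which is the scaling property the section describes.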
Human-Centered Control
Humans remain in control with meaningful oversight. Teti elevates human capabilities rather than replacing human judgment, with democratic input shaping its behavior.
- Policy-driven alignment with transparent guidelines
- Public input on model behavior and values
- Clear audit trails and explainable decisions
Proactive Risk Assessment
We identify and mitigate risks before they materialize, using our Preparedness Framework to evaluate capabilities and implement safeguards proactively.
We categorize risks into three areas: Human misuse (harmful applications), Misaligned AI (unintended behaviors), and Societal disruption (rapid change effects).
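The three risk areas above form a small taxonomy; one way to encode it is as an enumeration (an illustrative sketch, not an internal TetiAI schema):

```python
# Hypothetical encoding of the three risk areas as an enum, so that
# incident reports and evaluations can be tagged consistently.
from enum import Enum

class RiskArea(Enum):
    HUMAN_MISUSE = "harmful applications"
    MISALIGNED_AI = "unintended behaviors"
    SOCIETAL_DISRUPTION = "rapid change effects"
```

Tagging every assessed risk with exactly one area makes coverage auditable: a review can confirm that each area has associated evaluations and safeguards.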
Rigorous Measurement
Pre-Deployment
- Capability assessments
- Risk evaluations
- External red teaming
- Safety benchmark testing
- Adversarial robustness checks
Post-Deployment
- Continuous monitoring
- User feedback analysis
- Incident tracking
- Performance metrics
- Iterative improvements
Community Effort
Safety is a shared responsibility. We actively collaborate with researchers, policymakers, and the public to advance the field together.
Open Research
Publishing our safety methods and findings for peer review
Shared Standards
Contributing to industry-wide safety protocols and benchmarks
Public Input
Incorporating democratic feedback into Teti's development
Building the Future, Safely
Join us in developing AI that benefits humanity while prioritizing safety at every step.